When I started building MCP servers for CMS platforms, I built the WordPress one first. It took about three weeks: REST API integration, Zod schemas for every tool input, TypeScript types, a build pipeline, a README. Then I started the Shopify one and realized I was going to repeat everything.
Twelve CMS platforms. If I did them independently, I'd have twelve separate repos, twelve separate CI pipelines, twelve separate versioning strategies, and twelve places where a bug in the shared authentication logic would need to be fixed. CMS MCP Hub was the answer: one monorepo, one pipeline, shared core, 589 tools.
Here's how it's built.
The Problem: Every CMS Needs Its Own MCP Server
The Model Context Protocol lets you expose tools that an LLM (Claude, GPT-4, etc.) can call. An MCP server for WordPress might expose create_post, update_page, list_plugins. An MCP server for Shopify exposes create_product, update_inventory, list_orders. The concepts differ, but the plumbing is identical:
- Parse tool call input with Zod
- Authenticate against the CMS API
- Make HTTP requests to the right endpoint
- Transform the response into something the LLM can read
- Handle errors without crashing the MCP session
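Those five steps can be sketched as one generic handler. This is a hypothetical skeleton, not the actual shared code; the validate function and URL are placeholders for what each tool supplies:

```typescript
// Hypothetical skeleton of the plumbing every CMS tool handler repeats.
type ToolResult = { success: boolean; data?: unknown; error?: string };

async function handleToolCall(
  validate: (input: unknown) => Record<string, unknown>, // 1. parse/validate (Zod in practice)
  authHeaders: Record<string, string>,                   // 2. authenticate
  url: string,                                           // 3. the right endpoint
  input: unknown
): Promise<ToolResult> {
  try {
    const body = validate(input); // throws on bad input
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json", ...authHeaders },
      body: JSON.stringify(body),
    });
    if (!res.ok) return { success: false, error: `HTTP ${res.status}` };
    return { success: true, data: await res.json() }; // 4. transform for the LLM
  } catch (err) {
    // 5. never let an exception escape and crash the MCP session
    return { success: false, error: (err as Error).message };
  }
}
```

Everything in this skeleton is platform-agnostic, which is exactly why it belongs in a shared core rather than twelve repos.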
The platforms in scope: WordPress, Shopify, Ghost, Strapi, Webflow, Contentful, Prismic, Sanity, Directus, Payload CMS, Hygraph (GraphQL CMS), and KeystoneJS. Each has a completely different API surface: REST, GraphQL, or a custom SDK. But all share the same MCP server structure.
Monorepo Setup with Turborepo
Turborepo handles the build orchestration. The workspace structure:
text
cms-mcp-hub/
├── packages/
│ ├── core/ # @cms-mcp-hub/core shared utilities
│ ├── wordpress/ # @cms-mcp-hub/wordpress 89 tools
│ ├── shopify/ # @cms-mcp-hub/shopify 67 tools
│ ├── ghost/ # @cms-mcp-hub/ghost 45 tools
│ ├── strapi/ # @cms-mcp-hub/strapi 52 tools
│ ├── webflow/ # @cms-mcp-hub/webflow 48 tools
│ ├── contentful/ # @cms-mcp-hub/contentful 51 tools
│ ├── prismic/ # @cms-mcp-hub/prismic 41 tools
│ ├── sanity/ # @cms-mcp-hub/sanity 43 tools
│ ├── directus/ # @cms-mcp-hub/directus 55 tools
│ ├── payload/ # @cms-mcp-hub/payload 44 tools
│ ├── hygraph/ # @cms-mcp-hub/hygraph 38 tools
│ └── keystonejs/ # @cms-mcp-hub/keystonejs 36 tools
├── turbo.json
├── package.json # workspace root
└── tsconfig.base.json
package.json (root)
{
"private": true,
"workspaces": ["packages/*"],
"scripts": {
"build": "turbo run build",
"dev": "turbo run dev --parallel",
"lint": "turbo run lint",
"type-check": "turbo run type-check",
"test": "turbo run test"
},
"devDependencies": {
"turbo": "^2.0.0",
"typescript": "^5.4.0",
"tsup": "^8.0.0"
}
}
turbo.json
{
"$schema": "https://turbo.build/schema.json",
"tasks": {
"build": {
"dependsOn": ["^build"], // build dependencies first (core before everything else)
"outputs": ["dist/**"]
},
"dev": {
"cache": false,
"persistent": true
},
"lint": {
"outputs": []
},
"type-check": {
"dependsOn": ["^build"],
"outputs": []
}
}
}
The ^build dependency declaration is the key insight. It tells Turborepo that before building wordpress, it must build core first. Turbo figures out the DAG automatically from workspace dependencies.
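The edge in that DAG comes from an ordinary workspace dependency. An illustrative snippet of what a CMS package's package.json would declare (the version specifier depends on your package manager):

```json
{
  "name": "@cms-mcp-hub/wordpress",
  "dependencies": {
    "@cms-mcp-hub/core": "*"
  }
}
```

Because wordpress depends on core, `turbo run build` knows core's build must finish before wordpress's starts; no manual ordering is ever written down.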
The Core Package: Shared Foundation
@cms-mcp-hub/core exports the utilities every CMS package imports:
packages/core/src/index.ts
// Universal REST gateway: authenticate once, route anywhere
export { createCmsGateway } from "./gateway";
// Base error types
export { CmsAuthError, CmsApiError, CmsValidationError } from "./errors";
// Shared Zod utilities
export { paginationSchema, dateRangeSchema, slugSchema } from "./schemas";
// MCP server factory
export { createMcpServer } from "./server";
// Logger
export { createLogger } from "./logger";
// Type utilities
export type { CmsConfig, ToolResult, PaginatedResult } from "./types";
The Universal REST Gateway
Every CMS package instantiates a gateway with its credentials. The gateway handles authentication, rate limiting, retry logic, and response normalization once, in core, for everyone.
packages/core/src/gateway.ts
import { z } from "zod";
export interface GatewayConfig {
baseUrl: string;
auth:
| { type: "bearer"; token: string }
| { type: "basic"; username: string; password: string }
| { type: "api-key"; header: string; key: string };
timeout?: number;
retries?: number;
}
export function createCmsGateway(config: GatewayConfig) {
const headers = buildAuthHeaders(config.auth);
async function request<T>(
method: "GET" | "POST" | "PUT" | "PATCH" | "DELETE",
path: string,
body?: unknown
): Promise<T> {
const url = `${config.baseUrl}${path}`;
const maxRetries = config.retries ?? 3;
let lastError: Error | undefined;
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
const res = await fetch(url, {
method,
headers: { "Content-Type": "application/json", ...headers },
body: body ? JSON.stringify(body) : undefined,
signal: AbortSignal.timeout(config.timeout ?? 10_000),
});
if (!res.ok) {
const text = await res.text();
throw new CmsApiError(res.status, text);
}
return (await res.json()) as T;
} catch (err) {
lastError = err as Error;
if (err instanceof CmsApiError && err.status < 500) throw err; // don't retry 4xx
if (attempt < maxRetries - 1) await sleep(Math.pow(2, attempt) * 100);
}
}
throw lastError ?? new Error(`${method} ${path} failed`);
}
return {
  get: <T>(p: string) => request<T>("GET", p),
  post: <T>(p: string, b: unknown) => request<T>("POST", p, b),
  put: <T>(p: string, b: unknown) => request<T>("PUT", p, b),
  patch: <T>(p: string, b: unknown) => request<T>("PATCH", p, b),
  delete: <T>(p: string) => request<T>("DELETE", p),
};
}
function buildAuthHeaders(auth: GatewayConfig["auth"]): Record<string, string> {
switch (auth.type) {
case "bearer":
return { Authorization: `Bearer ${auth.token}` };
case "basic":
return { Authorization: `Basic ${btoa(`${auth.username}:${auth.password}`)}` };
case "api-key":
return { [auth.header]: auth.key };
}
}
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));
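To show how one gateway covers very different platforms, here is a hypothetical wiring for the Shopify package. The GatewayConfig shape is the one defined above; the header name follows Shopify's Admin REST API, while the store domain and env var names are illustrative:

```typescript
// GatewayConfig reproduced inline from core so this sketch is self-contained.
interface GatewayConfig {
  baseUrl: string;
  auth:
    | { type: "bearer"; token: string }
    | { type: "basic"; username: string; password: string }
    | { type: "api-key"; header: string; key: string };
  timeout?: number;
  retries?: number;
}

// Hypothetical Shopify wiring: the Admin REST API authenticates with an
// access token in the X-Shopify-Access-Token header, which maps onto the
// gateway's generic "api-key" mode.
const shopifyConfig: GatewayConfig = {
  baseUrl: "https://my-store.myshopify.com/admin/api/2024-04",
  auth: {
    type: "api-key",
    header: "X-Shopify-Access-Token",
    key: process.env.SHOPIFY_ACCESS_TOKEN ?? "",
  },
  retries: 5, // Shopify rate-limits aggressively, so retry a little harder
};
```

Contentful's management token maps onto the bearer mode and WordPress application passwords onto basic, so the same three auth variants cover every platform in the hub.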
Zod Schemas: The Key to LLM-Friendly Tools
Every tool input is validated with Zod before touching the API. But there's a subtlety: Zod's .describe() method on fields is how the LLM knows what to pass. The MCP protocol serializes your Zod schema into JSON Schema, and Claude reads those descriptions to figure out what each argument means.
packages/wordpress/src/tools/posts.ts
import { z } from "zod";
import { createCmsGateway } from "@cms-mcp-hub/core";
// The .describe() calls are what Claude actually reads
export const createPostSchema = z.object({
title: z.string()
.min(1)
.max(200)
.describe("The post title. Displayed as the H1 and used in the URL slug."),
content: z.string()
.describe("Post body in HTML or Gutenberg block JSON. Plain text is also accepted."),
status: z.enum(["draft", "publish", "private", "pending"])
.default("draft")
.describe("Publication status. Use 'draft' unless the user explicitly asks to publish."),
categories: z.array(z.number().int())
.optional()
.describe("Array of category IDs. Use the list_categories tool first to get valid IDs."),
tags: z.array(z.number().int())
.optional()
.describe("Array of tag IDs. Use list_tags tool to get valid IDs."),
featured_media: z.number().int().optional()
.describe("Media attachment ID for the featured image. Use upload_media first."),
meta: z.record(z.unknown()).optional()
.describe("Custom field key-value pairs for plugins like ACF or Yoast SEO."),
});
export type CreatePostInput = z.infer<typeof createPostSchema>;
export async function createPost(
gateway: ReturnType<typeof createCmsGateway>,
input: CreatePostInput
) {
// Zod already validated; trust the types
const post = await gateway.post<{ id: number; link: string }>("/wp/v2/posts", input);
return {
success: true,
postId: post.id,
url: post.link,
message: `Post created with ID ${post.id}`,
};
}
Tip: The .describe() pattern is critical. Without descriptions, Claude has to guess what featured_media means. With descriptions, it knows to call upload_media first and use the returned ID. Good Zod descriptions are what separate a usable MCP tool from a frustrating one.
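For a sense of what Claude actually receives, here is roughly what the title and status fields serialize to in JSON Schema. This is illustrative; the exact output depends on the Zod-to-JSON-Schema serializer in use:

```json
{
  "title": {
    "type": "string",
    "minLength": 1,
    "maxLength": 200,
    "description": "The post title. Displayed as the H1 and used in the URL slug."
  },
  "status": {
    "type": "string",
    "enum": ["draft", "publish", "private", "pending"],
    "default": "draft",
    "description": "Publication status. Use 'draft' unless the user explicitly asks to publish."
  }
}
```

The constraints (minLength, enum, default) travel along with the descriptions, so the model gets both the "what" and the "what's allowed" for every argument.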
The MCP Server Factory
Each package calls createMcpServer from core with its tool list. The factory handles the MCP protocol boilerplate (tool listing, tool execution, error serialization), so each CMS package just exports an array of tool definitions.
packages/wordpress/src/index.ts
import { createMcpServer, createCmsGateway } from "@cms-mcp-hub/core";
import {
  createPostSchema, createPost,
  updatePostSchema, updatePost,
  listPostsSchema, listPosts,
} from "./tools/posts";
// ... import all 89 tools
const gateway = createCmsGateway({
baseUrl: process.env.WP_BASE_URL!,
auth: {
type: "basic",
username: process.env.WP_USERNAME!,
password: process.env.WP_APP_PASSWORD!,
},
});
createMcpServer({
name: "cms-mcp-hub-wordpress",
version: "1.0.0",
tools: [
{
name: "create_post",
description: "Create a new WordPress post or custom post type",
inputSchema: createPostSchema,
handler: (input) => createPost(gateway, input),
},
{
name: "update_post",
description: "Update an existing post by ID",
inputSchema: updatePostSchema,
handler: (input) => updatePost(gateway, input),
},
{
name: "list_posts",
description: "List posts with filtering, pagination, and search",
inputSchema: listPostsSchema,
handler: (input) => listPosts(gateway, input),
},
// ... 86 more
],
}).start();
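The factory's implementation isn't shown here; the piece that matters most is error serialization. A hypothetical sketch of the wrapper it could apply to every handler, producing the MCP content/isError result shape instead of letting an exception kill the stdio session:

```typescript
// Hypothetical core helper: convert any thrown error into a structured
// MCP tool result ({ content, isError }) instead of crashing the server.
type McpToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };

function wrapHandler(
  handler: (input: unknown) => Promise<unknown>
): (input: unknown) => Promise<McpToolResult> {
  return async (input) => {
    try {
      const result = await handler(input);
      // Successful results are serialized as readable JSON for the LLM
      return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      // isError lets the client surface the failure without ending the session
      return { content: [{ type: "text", text: `Error: ${message}` }], isError: true };
    }
  };
}
```

With this in core, a tool handler in a CMS package can simply throw CmsApiError and trust that the session survives.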
tsup for Builds: Dual CJS/ESM Output
Each package uses tsup to produce both CommonJS and ESM builds with declaration files. This matters because different MCP clients have different module expectations.
packages/wordpress/tsup.config.ts
import { defineConfig } from "tsup";
export default defineConfig({
entry: ["src/index.ts"],
format: ["cjs", "esm"],
dts: true, // generate .d.ts declaration files
splitting: false, // single output file per format
sourcemap: true,
clean: true,
external: [ // don't bundle the SDK; let the consumer provide it
"@modelcontextprotocol/sdk",
"zod",
],
esbuildOptions(options) {
options.platform = "node";
},
});
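Assuming the package has no "type": "module" field, tsup emits dist/index.js (CJS) and dist/index.mjs (ESM). An illustrative exports map in each package's package.json then routes every consumer to the right file:

```json
{
  "main": "./dist/index.js",
  "module": "./dist/index.mjs",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.mjs",
      "require": "./dist/index.js"
    }
  }
}
```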
How 12 Platforms × ~49 Tools = 589
Each CMS package has a consistent tool surface organized into resource types. WordPress has the most because it's the most extensible; simpler platforms like KeystoneJS have fewer:
- WordPress 89 tools (posts, pages, media, taxonomies, users, comments, plugins, themes, options, custom post types, WooCommerce)
- Shopify 67 tools (products, variants, inventory, orders, customers, collections, metafields, webhooks)
- Strapi 52 tools (content types, entries, media, components, roles, users)
- Directus 55 tools (items, collections, files, flows, permissions, users, relations)
- Contentful 51 tools (entries, assets, content types, locales, webhooks, tags)
- Ghost 45 tools (posts, pages, tags, members, newsletters, offers, tiers)
- Webflow 48 tools (collections, items, pages, assets, forms, sites, webhooks)
- Sanity 43 tools (documents, assets, datasets, projects, users, webhooks)
- Payload 44 tools (collections, globals, media, users, preferences, versions)
- Prismic 41 tools (documents, custom types, releases, assets, environments)
- Hygraph 38 tools (content entries, models, enumerations, stages, locales)
- KeystoneJS 36 tools (lists, items, files, users, sessions, schema introspection)
Turborepo Caching Saved Hours
Turbo caches build outputs by content hash. If I change only the WordPress package, turbo run build rebuilds WordPress and reuses cached output for core and the other 11 CMS packages. A full build from scratch takes about 4 minutes across all 13 tasks; with a warm cache, touching one package takes about 15 seconds.
bash
# First build: all 12 packages compile fresh
turbo run build
# Tasks: 13 total, 0 cached, 13 completed (4m 12s)
# After editing only the wordpress package:
turbo run build
# Tasks: 13 total, 12 cached, 1 completed (14s)
# FULL TURBO (for the 12 unchanged packages)
Open Source Strategy
MIT license. The rationale: MCP servers are infrastructure, and infrastructure locked behind a commercial license gets ignored. The goal is adoption: developers adding these servers to their Claude Desktop configs, not paying for a SaaS.
What actually drives contributions: a clear README with copy-paste config examples, an examples/ directory showing real Claude prompts and what tools they invoke, and a CONTRIBUTING guide that explains the architecture so adding a new tool takes 30 minutes, not 3 hours.
Note: The biggest lesson from open source is that people don't read long READMEs. The first 10 lines need to answer "what does this do?" and "how do I install it?". Everything else is reference documentation.
What I'd Do Differently
- Start with the schema layer. I built 3 packages before establishing the Zod schema conventions; retrofitting consistent .describe() patterns across 150 tools was painful.
- Integration tests from day one. Each tool should have a test against a real CMS instance (or a mock that matches the API contract). I added tests late and caught schema mismatches I'd already shipped.
- Versioning strategy. Deciding whether to version packages independently or in lock-step is harder than it sounds. I use lock-step (all packages share the same version) because independently versioning 12 packages is overhead I don't want.