Building an MCP Server So You Can Shop From Claude
Every MCP server tutorial shows you the same thing: a calculator, a weather API, maybe a database query tool. Useful for demos. Boring in practice.
We built one that lets you buy a t-shirt without leaving your terminal.
@ultrathink-art/mcp-server is a TypeScript MCP server that exposes our entire shopping flow — browse, product details, cart management, checkout — as tools Claude can call. You say "show me stickers," and Claude calls ultrathink_browse({ category: "stickers" }). You say "add the large to my cart," and it calls ultrathink_cart_add with the right product and size IDs.
Here's how we built it and the design decisions that matter.
Architecture: Three Layers, No Magic
Claude Code (MCP Client)
    ↓ stdio
Ultrathink MCP Server (Node.js)
    ↓ HTTP
Rails Backend (ultrathink.art)
    ↓
SQLite
The MCP server is a standalone Node.js process. Claude Code spawns it, communicates over stdin/stdout using the Model Context Protocol, and the server translates tool calls into HTTP requests against our Rails API.
Why not build MCP directly into Rails? Two reasons. The MCP SDK is TypeScript-native — fighting that would mean maintaining a Ruby MCP implementation from scratch. And separation of concerns actually matters here: the MCP server is a thin translation layer. It knows about tool schemas and session management. Rails knows about products, carts, and payments. Neither needs to understand the other's domain.
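The thin-translation idea can be sketched as a pure mapping from tool calls to HTTP request descriptors. This is an illustrative sketch, not our exact code — the `/api/*` paths and `toHttpRequest` name are hypothetical:

```typescript
// Hypothetical sketch of the translation layer: MCP tool call in,
// HTTP request descriptor out. Paths are illustrative, not our real routes.
type HttpRequest = { method: "GET" | "POST"; path: string; body?: unknown };

function toHttpRequest(tool: string, args: Record<string, unknown>): HttpRequest {
  switch (tool) {
    case "ultrathink_product":
      return { method: "GET", path: `/api/products/${args.product_id}` };
    case "ultrathink_cart_add":
      return { method: "POST", path: "/api/cart/items", body: args };
    case "ultrathink_cart_view":
      return { method: "GET", path: "/api/cart" };
    case "ultrathink_checkout":
      return { method: "POST", path: "/api/checkout", body: args };
    default:
      throw new Error(`Unknown tool: ${tool}`);
  }
}
```

Because the mapping is this mechanical, the server stays easy to test: no Rails domain logic ever leaks into it.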
Six Tools, One Shopping Flow
The full tool set:
ultrathink_browse // List categories, or products in a category
ultrathink_product // Detailed product info (sizes, pricing, images)
ultrathink_cart_add // Add item to cart (with optional size_id)
ultrathink_cart_view // Current cart contents
ultrathink_cart_remove // Remove item from cart
ultrathink_checkout // Collect shipping info, get Stripe payment URL
The interesting design choice: ultrathink_browse does double duty. Call it with no arguments and you get categories. Call it with { category: "clothing" } and you get filtered products. One tool, two behaviors. This keeps the tool count low — LLMs perform better with fewer, well-described tools than with many narrow ones.
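The branch itself is tiny. A sketch of the dual behavior, with hypothetical endpoint paths:

```typescript
// Hypothetical sketch of the double-duty browse tool: no category means
// "list categories", a category means "list products in it". Paths illustrative.
function browsePath(args: { category?: string }): string {
  return args.category
    ? `/api/products?category=${encodeURIComponent(args.category)}`
    : "/api/categories";
}
```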
Each tool definition includes an inputSchema that tells Claude exactly what parameters exist and which are required:
{
  name: "ultrathink_cart_add",
  description: "Add a product to your cart. Include size_id for products that require size selection.",
  inputSchema: {
    type: "object",
    properties: {
      product_id: { type: "number", description: "Product ID to add" },
      quantity: { type: "number", description: "Quantity (default: 1)" },
      size_id: { type: "number", description: "Size ID (required for sized products)" }
    },
    required: ["product_id"]
  }
}
The description field is doing real work. "Include size_id for products that require size selection" teaches Claude the conditional requirement without adding validation logic to the schema itself. The product detail response includes requires_size: true — Claude reads that, then knows to ask the user for a size before calling cart_add.
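The same conditional is cheap to enforce server-side as a guard before the cart call. A sketch — `validateCartAdd` is a hypothetical helper, and the `requires_size` field is the flag described above:

```typescript
// Hypothetical guard: reject cart_add for sized products missing size_id,
// returning an error message Claude can relay to the user.
type Product = { id: number; name: string; requires_size: boolean };

function validateCartAdd(product: Product, args: { size_id?: number }): string | null {
  if (product.requires_size && args.size_id == null) {
    return `${product.name} requires a size. Ask the user which size, then retry with size_id.`;
  }
  return null; // valid — proceed with the cart request
}
```

The description teaches the happy path; the guard catches the cases where Claude skips it anyway.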
Session Persistence via File
Cart state needs to survive across Claude Code sessions. Our solution is deliberately simple:
import { existsSync, readFileSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const sessionPath = join(homedir(), ".ultrathink-session");

function getSessionId(): string {
  if (existsSync(sessionPath)) {
    return readFileSync(sessionPath, "utf-8").trim();
  }
  const sessionId = `mcp_${Date.now()}_${Math.random().toString(36).slice(2)}`;
  writeFileSync(sessionPath, sessionId);
  return sessionId;
}
First launch: generate a session ID, write it to ~/.ultrathink-session. Every subsequent request sends that ID in an X-Test-Session-ID header. Rails maps it to a cart the same way it handles browser sessions.
The alternative was per-process sessions (each Claude window gets a fresh cart). Shared session won — if you add something to your cart in one window, you see it in another. That matches how web browsers work and avoids the "where did my cart go?" confusion.
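Attaching the ID is a one-liner per request. A sketch — `sessionHeaders` is a hypothetical helper name:

```typescript
// Hypothetical sketch: headers sent with every backend request.
// Rails reads X-Test-Session-ID and maps it to a cart, like a browser session cookie.
function sessionHeaders(sessionId: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    "X-Test-Session-ID": sessionId,
  };
}

// Usage (URL illustrative):
// await fetch("https://ultrathink.art/api/cart", { headers: sessionHeaders(getSessionId()) });
```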
PII Sanitization: What the LLM Never Sees
This is the part most MCP tutorials skip entirely. When your MCP server handles checkout, it processes customer PII — names, addresses, emails. That data flows through the LLM's context window.
We strip it before it gets there:
const PII_FIELDS = [
  "email", "shipping_name", "shipping_address",
  "shipping_city", "shipping_state", "shipping_zip",
  "shipping_country", "billing_name", "billing_address",
  "billing_city", "billing_state", "billing_zip", "billing_country"
];

function sanitizeResponse(data: unknown): unknown {
  if (Array.isArray(data)) return data.map(sanitizeResponse);
  if (!data || typeof data !== "object") return data;

  const sanitized: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(data)) {
    if (PII_FIELDS.includes(key)) continue;
    sanitized[key] = typeof value === "object"
      ? sanitizeResponse(value)
      : value;
  }
  return sanitized;
}
Every API response passes through sanitizeResponse before reaching Claude. The checkout tool accepts PII as input (Claude needs to collect the address from the user), but the response only contains checkout_url, order_number, and total. The address goes to Rails, not back into context.
This is table stakes for any MCP server handling real customer data. LLM providers keep improving their data-handling practices, but defense in depth means not depending on any single layer.
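To see the recursion at work, here is a quick check with a compact version of the sanitizer (repeated with a shortened field list so the snippet stands alone):

```typescript
// Compact repeat of the sanitizer for a self-contained demo.
const PII = new Set(["email", "shipping_name", "shipping_address"]);

function sanitize(data: unknown): unknown {
  if (Array.isArray(data)) return data.map(sanitize);
  if (!data || typeof data !== "object") return data;
  const out: Record<string, unknown> = {};
  for (const [k, v] of Object.entries(data)) {
    if (PII.has(k)) continue;
    out[k] = sanitize(v);
  }
  return out;
}

// PII is stripped at every depth, including inside nested objects.
const order = {
  order_number: "UT-1042",
  total: 24.99,
  customer: { email: "a@b.com", shipping_name: "Jane" },
};
// sanitize(order) keeps order_number and total; customer comes back empty.
```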
The Checkout Handoff
Payment is the one step that leaves the terminal. The MCP server POSTs to a dedicated checkout endpoint, Rails generates a Stripe checkout session, and the response includes a payment URL:
{
  "success": true,
  "checkout_url": "https://checkout.stripe.com/c/pay/cs_live_...",
  "order_number": "UT-1042",
  "total": 24.99
}
Claude presents this to the user. They click the link, complete payment in Stripe's hosted checkout, and the webhook flow handles the rest — payment confirmation, order fulfillment, shipping notification.
We considered terminal-native payment (Stripe CLI integration, raw card input), but the security implications are severe. Hosted checkout means we never touch card numbers. PCI compliance stays with Stripe.
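The checkout response is also where the allow-list direction pays off: instead of stripping known-bad fields, the handler can pick known-good ones. A sketch — `toCheckoutResult` is a hypothetical name; the three fields match the response shown above:

```typescript
// Hypothetical sketch: allow-list the checkout response rather than
// block-listing PII. Only these three fields ever reach Claude's context.
type CheckoutResult = { checkout_url: string; order_number: string; total: number };

function toCheckoutResult(apiResponse: Record<string, unknown>): CheckoutResult {
  return {
    checkout_url: String(apiResponse.checkout_url),
    order_number: String(apiResponse.order_number),
    total: Number(apiResponse.total),
  };
}
```

An allow-list fails safe: a new PII field added to the API later is dropped by default instead of leaking until someone updates a block-list.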
Installation: One Config Change
{
  "mcpServers": {
    "ultrathink": {
      "command": "npx",
      "args": ["-y", "@ultrathink-art/mcp-server"]
    }
  }
}
Add this to your Claude Code MCP config. npx -y installs and runs in one shot. No global install, no build step, no configuration beyond the JSON block.
The server connects to ultrathink.art by default. Override with ULTRATHINK_API_URL for development.
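The override follows the usual env-with-default pattern (a sketch):

```typescript
// Default to production; ULTRATHINK_API_URL points the server at a dev backend.
const API_URL = process.env.ULTRATHINK_API_URL ?? "https://ultrathink.art";
```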
What We Learned
Tool descriptions matter more than schemas. Claude reads the description to understand when to call a tool. A precise description ("Browse ultrathink.art products. Call without arguments to see categories") outperforms a vague one ("Get products") even if the schema is identical.
Fewer tools, smarter tools. We started with eight tools (separate list_categories and list_products). Merging them into ultrathink_browse reduced confusion. LLMs pick from a smaller menu faster.
Session files beat in-memory state. MCP servers are processes that get spawned and killed. Anything in memory dies with the process. File-based persistence is the right default for local-first tools.
Sanitize aggressively. The MCP protocol doesn't have a concept of "sensitive fields." That's your problem. Strip PII from responses, log what you send, and treat the context window as semi-public.
The package is live on npm: @ultrathink-art/mcp-server. The source lives at mcp/ in our repo. If you're building an MCP server for anything beyond a demo, the patterns here — dual-purpose tools, file-based sessions, response sanitization — transfer directly.