Building Custom MCP Servers: Connect AI Agents to Your Dev Stack

7 min read
MCP · AI Agents · TypeScript · LLM · Architecture

For the past year, the primary friction point in using Large Language Models (LLMs) for software engineering hasn't been the intelligence of the models—it's been the data silo. We spend a significant portion of our day acting as a manual human bridge: copying logs from Datadog into a chat window, pasting database schemas into a prompt, or explaining the latest API changes to a model that was trained six months ago.

The Model Context Protocol (MCP), recently introduced by Anthropic, aims to standardize the way we connect these dots. Instead of writing bespoke integrations for every AI tool, MCP provides a universal interface to expose your local and remote data sources to AI agents.

In this guide, we will walk through the architecture of MCP and build a custom server that gives an AI agent direct access to an internal development tool.

The Architecture of Context

MCP is built on a client-server architecture, but it functions differently from the traditional web patterns you might be used to. In the MCP ecosystem, there are three main players:

  1. The Host: This is the AI application (like Claude Desktop or an IDE) that wants to consume data.
  2. The Client: A component within the host that initiates connections to servers.
  3. The Server: A lightweight process that exposes specific capabilities (Resources, Tools, and Prompts) to the client.

Communication happens over JSON-RPC 2.0. The beauty of this design is that the transport is pluggable: the server can run locally on your machine (via stdio) or remotely (via SSE/HTTP). This allows an AI agent to interact with your local file system, your private GitHub repos, or even your production database through a secure, controlled interface.
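
To make that concrete, here is roughly what a single tool invocation looks like on the wire. The message shape follows the MCP spec; the tool name and arguments are hypothetical:

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "query_logs",
    "arguments": { "service": "api-server", "lines": 50 }
  }
}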

The Three Pillars: Resources, Tools, and Prompts

When building an MCP server, you are essentially defining three types of capabilities:

1. Resources

Resources are the "read-only" data sources. Think of these as the files or documents the AI can browse. A resource could be a local log file, a database table, or a documentation site. They are identified by URIs (e.g., logs://api-server/latest).
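
As a sketch with the TypeScript SDK (this assumes a server instance like the one we build below, with resources: {} added to its capabilities; the log file path is hypothetical):

import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { readFile } from "fs/promises";

// Advertise a single log resource to the client
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: "logs://api-server/latest",
      name: "Latest API server log",
      mimeType: "text/plain",
    },
  ],
}));

// Serve the contents when the client reads that URI
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  if (request.params.uri !== "logs://api-server/latest") {
    throw new Error(`Unknown resource: ${request.params.uri}`);
  }
  const text = await readFile("/var/log/api-server/latest.log", "utf-8");
  return {
    contents: [{ uri: request.params.uri, mimeType: "text/plain", text }],
  };
});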

2. Tools

Tools are executable functions. This is where the real power lies. A tool allows the LLM to take action in your environment. This could be running a shell command, triggering a CI/CD pipeline, or performing a complex SQL query. Tools have a defined JSON schema for their arguments, allowing the model to understand exactly how to call them.

3. Prompts

Prompts are reusable templates that help the model understand how to interact with your data. Instead of typing a 500-word system prompt every time, you can expose a prompt called analyze-sentry-error that automatically pulls in the relevant context and instructions.
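
In SDK terms, a prompt is just two more handlers, plus prompts: {} in the server capabilities. A sketch (the instruction text is our own):

import {
  ListPromptsRequestSchema,
  GetPromptRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

server.setRequestHandler(ListPromptsRequestSchema, async () => ({
  prompts: [
    {
      name: "analyze-sentry-error",
      description: "Analyze a Sentry error against our triage checklist",
      arguments: [
        { name: "errorId", description: "Sentry issue ID", required: true },
      ],
    },
  ],
}));

server.setRequestHandler(GetPromptRequestSchema, async (request) => {
  const errorId = request.params.arguments?.errorId;
  return {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Analyze Sentry issue ${errorId}: summarize the stack trace, identify the likely faulty service, and propose a fix.`,
        },
      },
    ],
  };
});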

Building a Custom MCP Server

Let's get practical. We're going to build an MCP server in TypeScript that allows an AI agent to interact with an internal SQLite database used for tracking microservice deployments. This gives the agent the ability to answer questions like "When was the auth-service last deployed to staging?" without us having to look it up.
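
Throughout, we'll assume a deployments table with roughly the following shape (the column names are our assumption for this example; adapt them to your own tracker). A one-off script can create and seed it:

// seed.ts - creates the deployments.db assumed by the examples below
import sqlite3 from "sqlite3";

const db = new sqlite3.Database("deployments.db");

db.serialize(() => {
  db.run(`CREATE TABLE IF NOT EXISTS deployments (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    service_name TEXT NOT NULL,
    environment TEXT NOT NULL,
    version TEXT,
    timestamp TEXT DEFAULT CURRENT_TIMESTAMP
  )`);
  db.run(
    "INSERT INTO deployments (service_name, environment, version) VALUES (?, ?, ?)",
    ["auth-service", "staging", "1.4.2"]
  );
});

db.close();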

Step 1: Setting Up the Environment

First, initialize a new Node.js project and install the MCP SDK:

mkdir deployment-mcp-server
cd deployment-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod sqlite3
npm install -D @types/node @types/sqlite3 typescript

Initialize your tsconfig.json to support modern ESM:

{ "compilerOptions": { "target": "ES2022", "module": "NodeNext", "outDir": "build", "strict": true } }

Step 2: Implementing the Server

Create an index.ts file. We'll start by importing the necessary classes and defining our server instance.

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import sqlite3 from "sqlite3";
import { promisify } from "util";

// Initialize the database and promisify the callback-style API.
// The cast pins down the signature, since db.all is heavily overloaded.
const db = new sqlite3.Database("deployments.db");
const dbQuery = promisify(db.all.bind(db)) as (
  sql: string,
  params?: unknown[]
) => Promise<unknown[]>;

const server = new Server(
  {
    name: "deployment-tracker",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

Step 3: Defining a Tool

Now, we need to tell the MCP host what tools are available. We'll implement a tool called get_deployment_history.

server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "get_deployment_history",
        description: "Get the recent deployment history for a specific service",
        inputSchema: {
          type: "object",
          properties: {
            serviceName: { type: "string" },
            limit: { type: "number", default: 5 },
          },
          required: ["serviceName"],
        },
      },
    ],
  };
});

Step 4: Handling the Tool Call

When the AI decides to use this tool, the host will send a request. We need to handle that request, query our database, and return the result.

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_deployment_history") {
    // Default the limit ourselves: it is optional in the schema,
    // so the model is not required to send it
    const { serviceName, limit = 5 } = request.params.arguments as {
      serviceName: string;
      limit?: number;
    };

    try {
      const rows = await dbQuery(
        "SELECT * FROM deployments WHERE service_name = ? ORDER BY timestamp DESC LIMIT ?",
        [serviceName, limit]
      );
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(rows, null, 2),
          },
        ],
      };
    } catch (error) {
      return {
        content: [{ type: "text", text: `Error: ${error}` }],
        isError: true,
      };
    }
  }

  throw new Error("Tool not found");
});

Step 5: Connecting the Transport

Finally, we connect the server to the standard input/output transport. This allows the host (like Claude Desktop) to spawn our server as a child process.

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr: stdout is reserved for the JSON-RPC message stream
  console.error("Deployment MCP Server running on stdio");
}

main().catch(console.error);
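
You can compile and sanity-check the server before wiring it into a host. It will sit silently waiting on stdin, which is expected for a stdio transport:

npx tsc
node build/index.js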

Security and Permissions: The Senior Engineer's Concern

When you give an LLM access to your tools, you are essentially granting it your permissions. If your MCP server has write access to your production database, and the LLM decides to run a DROP TABLE, it will happen.

Here are three non-negotiable practices for custom MCP servers:

  1. Read-Only by Default: Unless the tool specifically requires writing data, use read-only credentials for your database or API integrations.
  2. Input Validation: Use libraries like Zod to strictly validate the arguments the LLM passes to your tools. Never trust the LLM to provide perfectly formatted data (see the sketch after this list).
  3. Human-in-the-Loop: For sensitive operations (like deploying code or deleting data), design the host environment to require manual approval before the MCP tool execution completes.
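
As a sketch of the second practice, here is how Zod can replace the unchecked cast from Step 4 (the bounds on serviceName and limit are our own choices):

import { z } from "zod";

// Mirrors the tool's inputSchema, but enforced at runtime
const GetDeploymentHistoryArgs = z.object({
  serviceName: z.string().min(1).max(100),
  limit: z.number().int().min(1).max(50).default(5),
});

// Inside the CallToolRequestSchema handler, replace the unchecked cast with:
const { serviceName, limit } = GetDeploymentHistoryArgs.parse(
  request.params.arguments
);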

Integrating with the Host

To use this server with Claude Desktop, you would add it to your claude_desktop_config.json file:

{ "mcpServers": { "deployments": { "command": "node", "args": ["/path/to/your/build/index.js"] } } }

Once you restart Claude Desktop, it will show a small "plug" icon indicating the server is connected. You can now ask: "What's the status of the last three deployments for the payment-gateway?" Claude will automatically see that it has a tool for this, call your local server, receive the JSON, and summarize the answer for you.

Why This Matters for the Future of DevTools

We are moving away from "Chatbots" and toward "Agentic Workflows." The limitation of early AI assistants was their isolation. They were like brilliant engineers locked in a room with no internet and no keyboard, forced to communicate only through a slot in the door.

MCP provides the keyboard. By building custom MCP servers, you are creating a specialized interface for your company's proprietary knowledge. Instead of training a model on your code (which is expensive and has privacy implications), you are providing it with a "real-time window" into your existing systems.

Actionable Conclusion

To start implementing MCP in your workflow:

  1. Audit your context switches: Identify the data you copy-paste most frequently into AI tools.
  2. Start small with a Read-Only Resource: Build a simple MCP server that exposes your project's internal documentation or API specs.
  3. Standardize your interfaces: Use the @modelcontextprotocol/sdk to ensure your servers are compatible with any host that adopts the protocol.
  4. Secure your tools: Implement strict schema validation and use restricted service accounts for any MCP-driven database or API access.

The goal is to stop teaching the AI about your stack and start giving it the tools to explore it for itself.