AI/ML

MCP (Model Context Protocol) Explained with Real Examples

Understanding Anthropic's Model Context Protocol and how to implement it for seamless AI integrations.

Muhammad Ali

AI Solutions Engineer & CTO

January 18, 2026 · 8 min read

What is MCP?

The Model Context Protocol (MCP) is an open standard created by Anthropic that enables AI assistants to securely connect with external data sources and tools. Think of it as a universal adapter that allows LLMs to interact with your databases, APIs, file systems, and other services in a standardized way.

Before MCP, every AI integration required custom code. Want your AI to query a database? Write custom code. Want it to read files? More custom code. Want it to interact with Slack? You get the idea. MCP changes this by providing a single protocol that any AI can use to interact with any compatible service.

MCP is to AI what USB was to hardware. Before USB, every device needed its own connector. MCP creates a universal interface for AI-to-service communication.

Why MCP Matters

At Fyncall, we integrated MCP early and it transformed our architecture. Here's why it matters:

1. Standardization

Instead of building custom integrations for each tool, we build MCP servers once and they work with any MCP-compatible AI client (Claude, custom agents, etc.).

2. Security

MCP's design gives you natural enforcement points for security:

  • Capability negotiation: clients and servers declare up front what they support
  • Explicit tool exposure: the AI only sees the tools you choose to expose
  • A single chokepoint where you can audit-log every operation

3. Composability

You can mix and match MCP servers. Your AI can use a database server, a file server, and an API server simultaneously, all through the same protocol.
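In practice, this mixing and matching happens in the client's configuration. For example, Claude Desktop wires up multiple servers through a single config file; the server names and paths below are illustrative:

```json
{
  "mcpServers": {
    "customer-db": {
      "command": "node",
      "args": ["./servers/customer-db.js"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/docs"]
    }
  }
}
```

Each entry launches one server process; the client speaks the same protocol to all of them.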

Core Concepts

MCP has three main primitives:

1. Resources

Resources are data that the AI can read. They're exposed via URIs:

// Example resources (the URI schemes are illustrative)
database://customers/123      // A customer record
file:///documents/report.pdf  // A local file
api://salesforce/deals        // External API data

2. Tools

Tools are actions the AI can perform. Each tool has a schema defining its inputs and outputs:

{
  "name": "send_email",
  "description": "Send an email to a customer",
  "inputSchema": {
    "type": "object",
    "properties": {
      "to": { "type": "string", "format": "email" },
      "subject": { "type": "string" },
      "body": { "type": "string" }
    },
    "required": ["to", "subject", "body"]
  }
}
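Under the hood, MCP messages travel as JSON-RPC 2.0. A call to the tool above arrives at the server as a request like this (the `id` and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "send_email",
    "arguments": {
      "to": "contact@acme.com",
      "subject": "Your invoice",
      "body": "Hi, your invoice is attached."
    }
  }
}
```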

3. Prompts

Prompts are reusable templates that combine resources and instructions:

{
  "name": "customer_support",
  "description": "Handle a customer support request",
  "arguments": [
    { "name": "customer_id", "required": true }
  ]
}
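A client fills in the template with a `prompts/get` request, and the server responds with ready-to-use messages. The shapes below follow the MCP spec; the message text itself is whatever the server chooses to generate:

```json
// Request
{ "jsonrpc": "2.0", "id": 3, "method": "prompts/get",
  "params": { "name": "customer_support", "arguments": { "customer_id": "cust_123" } } }

// Response
{ "jsonrpc": "2.0", "id": 3,
  "result": { "messages": [
    { "role": "user",
      "content": { "type": "text", "text": "Handle the support request for customer cust_123..." } }
  ] } }
```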

Building an MCP Server

Let's build a practical MCP server for customer data. This is based on what we actually use at Fyncall:

Step 1: Setup

npm init -y
npm install @modelcontextprotocol/sdk zod

Step 2: Define the Server

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Create the server and declare which capabilities it offers
const server = new Server({
  name: "customer-data-server",
  version: "1.0.0"
}, {
  capabilities: {
    resources: {},
    tools: {}
  }
});

// Simulated customer database
const customers = new Map([
  ["cust_123", {
    id: "cust_123",
    name: "Acme Corp",
    email: "contact@acme.com",
    plan: "enterprise",
    orders: [
      { id: "ord_1", total: 5000, status: "completed" },
      { id: "ord_2", total: 3200, status: "pending" }
    ]
  }],
]);

Step 3: Expose Resources

// The SDK dispatches on request schemas (from its types module), not method strings
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

// List available resources
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  const resources = [];
  for (const [id, customer] of customers) {
    resources.push({
      uri: `customer://${id}`,
      name: customer.name,
      mimeType: "application/json"
    });
  }
  return { resources };
});

// Read a specific resource
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const uri = request.params.uri;
  const customerId = uri.replace("customer://", "");
  const customer = customers.get(customerId);

  if (!customer) {
    throw new Error(`Customer not found: ${customerId}`);
  }

  return {
    contents: [{
      uri,
      mimeType: "application/json",
      text: JSON.stringify(customer, null, 2)
    }]
  };
});
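On the wire, a client reading a customer record through this server exchanges JSON-RPC messages like the following (record contents abridged):

```json
// Request
{ "jsonrpc": "2.0", "id": 1, "method": "resources/read",
  "params": { "uri": "customer://cust_123" } }

// Response
{ "jsonrpc": "2.0", "id": 1,
  "result": { "contents": [
    { "uri": "customer://cust_123",
      "mimeType": "application/json",
      "text": "{ \"id\": \"cust_123\", \"name\": \"Acme Corp\", ... }" }
  ] } }
```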

Step 4: Expose Tools

// Request schemas for the tool endpoints
import {
  ListToolsRequestSchema,
  CallToolRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

// List available tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "update_customer",
        description: "Update customer information",
        inputSchema: {
          type: "object",
          properties: {
            customer_id: { type: "string" },
            updates: {
              type: "object",
              properties: {
                email: { type: "string" },
                plan: { type: "string", enum: ["free", "pro", "enterprise"] }
              }
            }
          },
          required: ["customer_id", "updates"]
        }
      },
      {
        name: "process_refund",
        description: "Process a refund for an order",
        inputSchema: {
          type: "object",
          properties: {
            order_id: { type: "string" },
            amount: { type: "number" },
            reason: { type: "string" }
          },
          required: ["order_id", "amount", "reason"]
        }
      }
    ]
  };
});

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  switch (name) {
    case "update_customer": {
      const customer = customers.get(args.customer_id);
      if (!customer) {
        return {
          content: [{ type: "text", text: "Customer not found" }],
          isError: true
        };
      }
      Object.assign(customer, args.updates);
      return {
        content: [{
          type: "text",
          text: `Updated customer ${args.customer_id}`
        }]
      };
    }

    case "process_refund": {
      // In production, this would integrate with the payment provider
      return {
        content: [{
          type: "text",
          text: `Refund of $${args.amount} processed for order ${args.order_id}`
        }]
      };
    }

    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});

Step 5: Run the Server

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr: with the stdio transport, stdout is reserved for JSON-RPC traffic
  console.error("Customer data MCP server running");
}

main().catch(console.error);

Real-World Example: Fyncall Integration

At Fyncall, we have multiple MCP servers powering our AI agents:

1. Customer Database Server

Exposes customer data, order history, and ticket information. Agents can read customer context before responding.

2. Policy Engine Server

Exposes business rules as resources. The AI can query what actions are allowed for specific scenarios.

3. Action Server

Provides tools for executing actions: sending emails, issuing refunds, updating tickets, escalating to humans.

The Flow

Customer Message
      │
      ▼
┌─────────────────┐
│  Claude Agent   │
└────────┬────────┘
         │ MCP Protocol
    ┌────┴─────┬──────────┐
    ▼          ▼          ▼
┌────────┐ ┌────────┐ ┌────────┐
│Customer│ │ Policy │ │ Action │
│   DB   │ │ Engine │ │ Server │
│ Server │ │ Server │ │        │
└────────┘ └────────┘ └────────┘

The AI agent can:

  1. Read customer context from the DB server
  2. Check allowed actions from the Policy server
  3. Execute approved actions via the Action server

All of it flows through the same protocol, every action is audited, and every capability is explicitly granted.

Best Practices

1. Principle of Least Privilege

Only expose what's necessary. If your AI only needs to read orders, don't expose customer deletion tools.

2. Validate Everything

Use Zod or similar for input validation. Never trust AI-generated inputs without validation.
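Zod is the idiomatic choice here. As a dependency-free sketch of the same idea, here is a hand-rolled check of the `send_email` tool's arguments against its `inputSchema`; `validateEmailArgs` is a hypothetical helper, not part of the SDK:

```javascript
// Hypothetical validator for the send_email tool's arguments.
// In real code, a Zod schema (z.object({...}).parse(args)) does this with less work.
function validateEmailArgs(args) {
  const errors = [];
  // Every required field must be a non-empty string
  for (const field of ["to", "subject", "body"]) {
    if (typeof args[field] !== "string" || args[field].length === 0) {
      errors.push(`'${field}' must be a non-empty string`);
    }
  }
  // Minimal email shape check -- a real validator would be stricter
  if (typeof args.to === "string" && !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(args.to)) {
    errors.push("'to' must be a valid email address");
  }
  return { valid: errors.length === 0, errors };
}

console.log(validateEmailArgs({ to: "x@y.com", subject: "Hi", body: "Hello" }).valid); // true
```

Run this before the `switch` in your `tools/call` handler and return the `errors` array as an error result, so the model gets actionable feedback instead of a stack trace.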

3. Audit Logging

Log every tool call with context. You need to know what the AI did and why.

import { CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js";

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  // Log before execution (auditLog and conversationId are app-specific,
  // not SDK features; _meta is the spec's slot for out-of-band metadata)
  await auditLog.write({
    timestamp: new Date(),
    tool: name,
    arguments: args,
    conversationId: request.params._meta?.conversationId
  });

  // Execute tool...
});

4. Rate Limiting

Implement rate limits at the MCP server level to prevent runaway AI loops.
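One sketch of this, assuming a single-process server: a sliding-window counter keyed by tool name, called at the top of the `tools/call` handler. The limits and names here are illustrative, not SDK features:

```javascript
// Illustrative sliding-window rate limiter for tool calls.
const WINDOW_MS = 60_000;   // 1-minute window
const MAX_CALLS = 20;       // max calls per tool per window
const callLog = new Map();  // tool name -> timestamps of recent calls

function checkRateLimit(toolName, now = Date.now()) {
  // Keep only calls that are still inside the window
  const calls = (callLog.get(toolName) ?? []).filter(t => now - t < WINDOW_MS);
  if (calls.length >= MAX_CALLS) {
    return false; // over the limit -- reject the call with a clear error
  }
  calls.push(now);
  callLog.set(toolName, calls);
  return true;
}
```

When `checkRateLimit` returns false, return an error result that tells the model to back off; that stops a looping agent from hammering the same tool.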

5. Error Handling

Return meaningful errors that help the AI self-correct:

// Bad
throw new Error("Error");

// Good
throw new Error(
  "Cannot process refund: order_id 'xyz' not found. " +
  "Valid order IDs start with 'ord_'. Check the customer's order history."
);

Conclusion

MCP is a game-changer for AI integrations. Instead of writing bespoke glue code for every service, you build an MCP server once and get interoperability with every compatible client for free.

At Fyncall, MCP has allowed us to:

  • Reduce integration code by 60%
  • Add new data sources in hours instead of days
  • Maintain consistent security across all AI interactions
  • Audit everything the AI does

If you're building AI systems that need to interact with external services, MCP should be at the top of your list.


Want to discuss MCP implementations or have questions about the patterns described here? Reach out on LinkedIn or email me.

Written by Muhammad Ali

AI Solutions Engineer & CTO building multi-agent systems and full-stack architectures. Currently leading engineering at Fyncall and Builderson Group.