Introducing MCP Tools for AI Agents

Most MCP servers get it wrong. They dump massive tool descriptions, return bloated responses, and eat through your context window before the agent even starts working. We built ApiCrate's MCP server to do the opposite.

The Problem with MCP Servers Today

MCP is powerful, but the ecosystem has a context problem. Connect a few MCP servers to Claude or Cursor, and watch what happens:

  • Tool descriptions alone consume thousands of tokens. Some servers register dozens of tools with verbose docstrings, parameter descriptions, and usage examples baked into the schema. The LLM has to process all of this on every turn — even for tools it won't use.
  • Responses are kitchen-sink JSON. Ask for one field, get fifty. A simple country lookup returns the entire ISO record, every subdivision, every currency variant. The agent then has to extract what it needs from a wall of data.
  • No cost awareness. The agent has no idea whether a tool call is cheap or expensive. It might burn through your daily quota on a single bulk operation because nothing told it to batch carefully.
  • Flat tool lists with no structure. Twenty tools dumped in a flat list. The LLM has to read every description to figure out which tool does what. Related tools (like single vs. bulk variants) aren't grouped or cross-referenced.

The result? Your 200k context window fills up fast. The agent's reasoning quality degrades. And you pay for tokens that carry data nobody asked for.

How ApiCrate's MCP Server Is Different

We designed our MCP server around a simple principle: every token should earn its place in the context window.

Concise Tool Descriptions

Each tool has a one-line description and a cost annotation. That's it. No usage examples, no verbose parameter docs, no walls of text.

apicrate-lookup-country
  "Look up an ISO 3166 country by alpha-2, alpha-3, or numeric code. Cost: 1 credit."

apicrate-find-nearby-postal-codes
  "Find postal codes within a radius of a geographic point. Cost: 5 credits."

The LLM knows what the tool does and what it costs. The parameter schema (provided by MCP itself) handles the rest. No redundant documentation in the description.
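To make the idea concrete, here is an illustrative sketch of what a concise tool entry looks like as plain data. The tool name and description are from above; the registration shape itself is a stand-in, not ApiCrate's actual server code:

```python
# Illustrative sketch: a concise MCP tool definition as a plain dict.
# The inputSchema carries the parameter details; the description stays one line.
TOOLS = [
    {
        "name": "apicrate-lookup-country",
        "description": (
            "Look up an ISO 3166 country by alpha-2, alpha-3, "
            "or numeric code. Cost: 1 credit."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "code": {"type": "string"},
                "fields": {"type": "string"},
            },
            "required": ["code"],
        },
    },
]

# Short descriptions keep the per-turn token overhead small.
assert all(len(t["description"]) <= 100 for t in TOOLS)
```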

Field Filtering on Every Tool

Most of our tools support a fields parameter, so instead of returning everything and hoping the agent can find what it needs, the agent asks for exactly the fields it wants:

{
  "tool": "apicrate-lookup-country",
  "input": {"code": "JP", "fields": "name,alpha_2,region"}
}

Response:

{
  "name": "Japan",
  "alpha_2": "JP",
  "region": "Asia"
}

Three fields instead of forty. Less data in the context. Faster agent reasoning. Lower token cost.

Variable Credit Costs

Not all operations are equal. A simple hash costs 1 credit. A spatial PostGIS query costs 5. Password hashing with argon2 costs 2. Our MCP tools declare their cost in the description so the agent can make informed decisions:

Cost       Examples
1 credit   Country lookup, postal code validation, timezone info
2 credits  UA parsing, password hashing, postal code lookup
3 credits  Country search, postal code search, Bible search
4 credits  Email risk check
5 credits  Nearby postal codes, IP geolocation

An agent building a multi-step workflow can estimate total cost before executing. No surprises.
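Budgeting from declared costs can be as simple as summing a lookup table. A sketch with a few costs from the table above hard-coded; some tool names (the single-item email risk and UA parsing variants) are assumed from the domains listed, not confirmed identifiers:

```python
# Credit costs as declared in the tool descriptions (illustrative subset).
COSTS = {
    "apicrate-lookup-country": 1,
    "apicrate-parse-user-agent": 2,        # assumed name for the single-UA tool
    "apicrate-check-email-risk": 4,        # assumed name for the single-email tool
    "apicrate-find-nearby-postal-codes": 5,
}

def estimate(plan: list[str]) -> int:
    """Total credits for a planned sequence of tool calls."""
    return sum(COSTS[tool] for tool in plan)

plan = ["apicrate-lookup-country", "apicrate-find-nearby-postal-codes"]
print(estimate(plan))  # 6
```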

Bulk Tools for Batch Operations

When an agent needs to process a list, calling a tool 100 times wastes context on 100 request/response pairs. Our bulk tools handle this in one call:

  • apicrate-parse-user-agents-bulk — up to 100 UAs at once (1 credit each vs. 2 for single)
  • apicrate-check-email-risk-bulk — up to 10 emails in one call
  • apicrate-validate-postal-codes-bulk — up to 50 codes per request

One tool call, one response, minimal context overhead.
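Client-side, the pattern is to chunk the list to the tool's batch limit instead of looping single calls. A sketch where call_tool stands in for your MCP client and the user_agents input key is an assumption; the 100-item limit matches apicrate-parse-user-agents-bulk above:

```python
def chunked(items: list, size: int):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def parse_user_agents(call_tool, user_agents: list[str]) -> list[dict]:
    """Parse UAs via the bulk tool: 250 UAs means 3 calls, not 250."""
    results = []
    for batch in chunked(user_agents, 100):  # bulk limit: 100 UAs per call
        results.extend(call_tool("apicrate-parse-user-agents-bulk",
                                 {"user_agents": batch}))
    return results
```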

Composable Tool Chains

Tools are designed to chain naturally. An agent working with IP data might:

  1. apicrate-geolocate-ip — get country code and coordinates from an IP
  2. apicrate-lookup-country — get country details from the code
  3. apicrate-find-nearby-postal-codes — find postal codes near those coordinates

Each tool returns just what the next tool needs. No intermediate bloat.
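With field filtering, each hop in that chain can request only what the next hop consumes. A sketch of the three-step workflow; call stands in for your MCP client, and the parameter and field names (ip, latitude, radius_km, and so on) are assumptions, not the confirmed schema:

```python
def enrich_ip(call, ip: str) -> dict:
    """Chain: IP -> coordinates + country code -> country details -> nearby codes."""
    geo = call("apicrate-geolocate-ip",
               {"ip": ip, "fields": "country_code,latitude,longitude"})
    country = call("apicrate-lookup-country",
                   {"code": geo["country_code"], "fields": "name,region"})
    nearby = call("apicrate-find-nearby-postal-codes",
                  {"latitude": geo["latitude"], "longitude": geo["longitude"],
                   "radius_km": 10, "fields": "postal_code"})
    return {"country": country, "nearby": nearby}
```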

21 Tools Across 8 Domains

Domain          Tools  What They Do
Countries           3  Lookup, search, batch validate ISO codes
Postal Codes        7  Lookup, validate, search, nearby/spatial, bulk validate
User Agents         2  Single + bulk UA parsing with spoofing detection
Hashing             2  Digest hashing (MD5/SHA) + password hashing (bcrypt/argon2)
Email Risk          2  Single + bulk disposable/fraud detection
IP Geolocation      1  City-level IP geolocation
Timezones           2  Timezone info + time conversion with DST
Bible               2  Verse lookup + full-text search

Getting Started

Add ApiCrate to your Claude Desktop config:

{
  "mcpServers": {
    "apicrate": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://api.apicrate.io/mcp/sse"],
      "env": {
        "API_KEY": "YOUR_APICRATE_KEY"
      }
    }
  }
}

Works with Claude Desktop, Claude Code, Cursor, Windsurf, and any MCP-compatible client. Sign up for a free API key — the free tier gives you 100 MCP credits per day to try everything out.

Build MCP Servers That Respect the Context Window

If you're building your own MCP server, here's what we learned:

  1. Keep tool descriptions under 100 characters. The LLM reads every tool description on every turn. Respect that.
  2. Support field filtering. Let callers ask for only what they need. Don't return your entire data model by default.
  3. Declare costs upfront. If your tools have variable costs, say so in the description. Agents can't budget what they can't see.
  4. Offer bulk variants. One call with 100 items beats 100 calls with 1 item — especially in a context window.
  5. Return structured, flat JSON. Deeply nested responses are harder for LLMs to parse. Keep it shallow.
  6. Fail explicitly. Don't return success with an error message buried in the data. Return a clear error so the agent can retry or adjust.
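Point 6 in practice: one way to keep success and failure unambiguous is a small response envelope. A sketch under stated assumptions; the ok/error shape is illustrative, not a spec ApiCrate mandates:

```python
def lookup(code: str, countries: dict) -> dict:
    """Return an explicit error envelope instead of success-with-error-string."""
    if code not in countries:
        return {"ok": False,
                "error": {"code": "NOT_FOUND",
                          "message": f"unknown country code {code!r}"}}
    return {"ok": True, "data": countries[code]}

COUNTRIES = {"JP": {"name": "Japan"}}
print(lookup("XX", COUNTRIES)["ok"])  # False
```

The agent can branch on ok without parsing prose, which makes retry-or-adjust logic trivial.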

The MCP ecosystem is growing fast. The servers that win will be the ones that treat context like the scarce resource it is.