MCP use for Kimi K2 via the Moonshot API

Hi. Thanks for a great model and service.
There’s one thing I can’t find enough information about, though: MCP support when using the model via the API.
I understand that I can give the model tools to call, but I’d like to give it an MCP endpoint instead. Is that possible? I’ve read that Kimi K2 has MCP support, but there are no details about it in the docs.
Thanks.

Hi there, and thanks for the kind words!

“MCP support” is a feature of the local agent / client, not of the underlying chat-completions endpoint.
Moonshot exposes two endpoints:

  1. https://api.moonshot.cn/v1/chat/completions – OpenAI-compatible (spec, migration tips).
  2. https://api.moonshot.cn/anthropic/v1/messages – identical to Anthropic’s Messages API.

Your agent code is responsible for:

  1. Describing the available tools in the tools field of the chat request.
  2. Parsing the model’s tool_calls response (if any) and invoking the corresponding tools.
  3. Shipping the tool results back to the model in the next turn.
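Steps 2 and 3 can be sketched like this. The message shapes follow the standard chat-completions conventions; the helper and the `get_weather` stub are hypothetical, shown here instead of a real network round-trip:

```python
import json

def handle_tool_calls(assistant_message, tool_registry):
    """Execute each tool_call the model requested (step 2) and
    return the `role: tool` messages to append for the next turn (step 3)."""
    results = []
    for call in assistant_message.get("tool_calls", []):
        fn = call["function"]
        tool = tool_registry[fn["name"]]
        output = tool(**json.loads(fn["arguments"]))
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(output),
        })
    return results

# Example with a stub tool standing in for a real implementation:
registry = {"get_weather": lambda city: {"city": city, "temp_c": 21}}
assistant = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": '{"city": "Beijing"}'},
    }],
}
follow_up = handle_tool_calls(assistant, registry)
```

You would append `assistant` and then `follow_up` to your `messages` list and send the whole conversation back in the next request.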

If you want to drive those tools through an MCP (Model Context Protocol) server, just point your agent framework at the MCP server; the Moonshot API itself never talks to MCP—it only sees ordinary tool definitions and tool-call messages.
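Concretely, an MCP tool descriptor (as returned by the server’s `tools/list`) maps onto the chat request’s `tools` field with a small transform. The field names below follow the MCP and OpenAI specs; this is a generic sketch, not Moonshot-specific code:

```python
def mcp_tool_to_chat_tool(mcp_tool: dict) -> dict:
    """Convert one MCP tool descriptor into an OpenAI-style
    function-tool definition for the `tools` field."""
    return {
        "type": "function",
        "function": {
            "name": mcp_tool["name"],
            "description": mcp_tool.get("description", ""),
            # MCP's inputSchema is already JSON Schema, which is
            # exactly what the `parameters` field expects.
            "parameters": mcp_tool.get(
                "inputSchema", {"type": "object", "properties": {}}),
        },
    }

mcp_tool = {
    "name": "search",
    "description": "Full-text search.",
    "inputSchema": {"type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"]},
}
chat_tool = mcp_tool_to_chat_tool(mcp_tool)
```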

Practical pointers

Experience without coding
Open Kimi Playground → “MCP Servers”, add your own MCP servers or paste your ModelScope token, tick the tools you want, and chat—zero code required.

Need a serverless tool?
Use a Formula URI like moonshot/web-search:latest; fetch its schema with GET /formulas/{uri}/tools and invoke it with POST /formulas/{uri}/fibers. Full example: https://platform.moonshot.ai/docs/guide/use-formula-tool-in-chatapi . You can also try it in the Playground mentioned above.
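Spelled out, the two-call pattern looks roughly like this. Only the URL shapes are shown; the base URL is an assumption, so check the linked guide before relying on it:

```python
# Assumed base URL; see the Formula guide linked above for specifics.
BASE = "https://api.moonshot.ai/v1"
uri = "moonshot/web-search:latest"

# 1. Fetch the Formula's tool schema (HTTP GET):
tools_url = f"{BASE}/formulas/{uri}/tools"

# 2. Invoke it by creating a "fiber" (HTTP POST with the tool arguments):
fibers_url = f"{BASE}/formulas/{uri}/fibers"
```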

So the short answer is: you don’t send an “MCP endpoint” to Kimi; you send ordinary tool definitions, and your local agent (or Playground) can source those tools from anywhere—MCP included.

Additionally, you may be interested in this link: kimi-cc/README_EN.md at main · LLM-Red-Team/kimi-cc · GitHub

@yuikns Thanks for the information, but I still don’t quite understand, sorry. This is quite new to me. Let’s be specific then.

I’m using the https://api.moonshot.cn/v1/chat/completions endpoint and would like to allow K2 to use MCP of Supabase.com.

I’m looking for something like what OpenAI shows in their documentation:

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "o4-mini-deep-research",
  "input": [
    {
      "role": "developer",
      "content": [
        {
          "type": "input_text",
          "text": "You are a research assistant that searches MCP servers to find answers to your questions."
        }
      ]
    },
    {
      "role": "user",
      "content": [
        {
          "type": "input_text",
          "text": "Are cats attached to their homes? Give a succinct one page overview."
        }
      ]
    }
  ],
  "tools": [
    {
      "type": "mcp",
      "server_label": "cats",
      "server_url": "https://777ff573-9947-4b9c-8982-658fa40c7d09-00-3le96u7wsymx.janeway.replit.dev/sse/",
      "allowed_tools": [
        "search",
        "fetch"
      ],
      "require_approval": "never"
    }
  ]
}'

I assume that the type: mcp is a feature they specifically implemented to make this easy.
My question is if Moonshot API can do the same thing, basically. :slight_smile: Maybe it’s a feature request.

The endpoint in your example is OpenAI’s new /v1/responses (docs), not the familiar /v1/chat/completions (docs).
Responses can auto-execute several tool rounds and return a final summary in one shot, so it lets you drop in "type": "mcp" plus a server URL and forget about the plumbing.

Our current https://api.moonshot.cn/v1/chat/completions (or https://api.moonshot.ai/v1/chat/completions) follows the classic Chat Completions spec: one request → one reply (maybe with tool_calls) → conversation ends. That spec has no "server_url" or "type": "mcp" field and no server-side tool execution; you have to handle any MCP traffic yourself.
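In the meantime, the bridge lives in your agent code. A sketch of what the chat-completions equivalent of your Responses request would carry: ordinary function tools that your agent fetched from the Supabase MCP server beforehand. The tool name `list_tables` is hypothetical, and check the docs for the current K2 model id:

```python
# Chat-completions equivalent of the "type": "mcp" Responses request:
# the agent has already asked the MCP server for its tool list and
# translated each entry into an ordinary function tool.
payload = {
    "model": "kimi-k2-0711-preview",  # assumption; check the docs for the current id
    "messages": [
        {"role": "user", "content": "Which tables exist in my project?"},
    ],
    "tools": [
        {"type": "function",
         "function": {
             # Hypothetical tool, fetched from the Supabase MCP server:
             "name": "list_tables",
             "description": "List tables in the connected database.",
             "parameters": {"type": "object", "properties": {}},
         }},
    ],
}
```

When the reply contains `tool_calls`, your agent forwards each one to the MCP server, then sends the results back as `role: tool` messages on the next request.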

We’re actively looking at supporting that newer endpoint. Thanks for the nudge!
