An Introduction to MCP

Model Context Protocol, or MCP, has been rapidly gaining attention in the AI community. MCP represents a powerful shift in how we can interact with AI systems, making complex operations as simple as having a conversation. In this article, we'll explore what MCP is, why it's becoming increasingly important in the AI landscape, and how SZNS plans to utilize it.

What is MCP?

At its foundation, MCP is a framework developed by Anthropic that creates a standardized approach for LLMs to interact with external tools. These "tools" are specialized functions that developers code, allowing the AI agent to perform specific tasks. The AI's role is to intelligently analyze user requests, determine which tools are needed, and provide the appropriate arguments to execute these functions effectively.

MCP has two main components:

  • The server defines custom tools that can be called by an AI agent.
  • The client is the LLM-powered agent that interprets user input and calls those tools as needed.

Most developers only need to build a server. Clients like Claude, Gemini, or Goose already know how to speak MCP. They just need to connect to your server and get your tool definitions.
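Under the hood, the client and server exchange JSON-RPC 2.0 messages; the protocol defines methods such as tools/list (to discover a server's tools) and tools/call (to invoke one). A rough sketch of a tool-call request, with an illustrative tool name and arguments:

```python
import json

# Sketch of the JSON-RPC 2.0 request an MCP client sends to invoke a tool.
# The method name "tools/call" comes from the MCP spec; the tool name and
# arguments below are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_temperature",       # a tool defined by the server
        "arguments": {"city": "Tokyo"},  # filled in by the LLM from the user's request
    },
}
print(json.dumps(request, indent=2))
```

The key point is that the LLM never executes code itself: it only picks the tool name and fills in the arguments, and the server runs the function.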

Why is MCP Useful?

MCP bridges the gap between natural human communication and time-consuming operations. Consider this example: imagine an MCP server that specializes in providing weather information. Instead of searching through multiple weather websites and entering locations by hand, you could simply ask the MCP client, "Hey, what's the temperature right now in Tokyo?" The AI understands this natural request, automatically triggers the appropriate weather API calls through tools the developer has already created, and returns the answer.

While this weather example might not seem particularly revolutionary, it demonstrates a powerful concept. MCP extends AI capabilities beyond text generation or image creation by allowing the AI to intelligently call user-created functions. The possibilities of applying this framework are virtually endless.

A key advantage of MCP is its standardization, which allows integration across different AI clients and language models. This means developers can connect their MCP tools to various AI systems with little or no additional modification.

MCP at SZNS

After exploring various MCP implementations both internally and with our customers, we identified several promising approaches. One particularly effective use case emerged when we developed a specialized MCP server focused on Google Cloud Platform (GCP) functionality. Users can make natural language queries like "Create a GCP bucket in project example-123 with the archive storage policy," and the AI agent calls the correct function with the parameters extracted from the query. This eliminates the need for complex command-line operations or navigating multiple console menus on GCP. As SZNS specializes in GCP services, this implementation could increase internal productivity significantly.

One of MCP's advantages is its accessibility to developers; the implementation process is straightforward. First, define and test the necessary functions as you would for any other program. The only difference is that each function is marked with the MCP tool decorator so the library recognizes it as a tool. Below is an example of one of the tools included in the MCP's toolbox:

An example MCP tool

From here, we can go a few different directions on the client side. Anthropic, the creator of both MCP and Claude, provides seamless integration between their language model and MCP servers through a desktop instance. Goose is another powerful framework that lets users load tools into instances as extensions that can be shared with others. Other options include Gemini, which has strong Python integration, and AgentSpace, which focuses on multi-agent coordination. For this example, we'll proceed with Claude's implementation to demonstrate the core concepts.

Now that we have our tools, all we need to do is structure our code as a standard Python project and give Claude an executable entry point in our pyproject.toml:

pyproject.toml for example MCP Project
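A minimal pyproject.toml along these lines might look like the following; the project name, package path, and entry-point name are illustrative, and `main` is assumed to be a function that starts the server (e.g., by calling `mcp.run()`):

```toml
[project]
name = "gcp-tools"
version = "0.1.0"
dependencies = ["mcp"]

# Executable entry point that Claude Desktop will invoke
[project.scripts]
gcp-tools = "gcp_tools.server:main"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```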

With these two steps complete, we can load up our new MCP on Claude Desktop and verify that it recognizes the tools we built. To set this up in Claude Desktop, open the developer settings and provide your server configuration in the claude_desktop_config.json file, including the file path, command argument, and name. When this is all set up, we can see that Claude recognizes our MCP tools:

Claude Desktop listing tools from newly created MCP server
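For reference, a claude_desktop_config.json entry of the kind described above generally takes this shape; the server name, command, and path here are placeholders for your own project:

```json
{
  "mcpServers": {
    "gcp-tools": {
      "command": "uv",
      "args": ["--directory", "/absolute/path/to/project", "run", "gcp-tools"]
    }
  }
}
```

After restarting Claude Desktop, the server's tools should appear in the client.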

From here we can call our tools in natural language!

Claude Desktop executing the list_buckets tool

This demonstration shows just a small sample of what's possible. With a more extensive MCP toolbox, the potential applications are essentially limitless.

Future Steps for MCP and SZNS

After seeing how powerful this internal proof of concept was, SZNS is exploring ways to expand our MCP capabilities. One idea is to create a large internal MCP server with an extensive collection of toolboxes that our engineers can easily modify and extend. This implementation would significantly enhance our productivity across any platform that exposes an API.

Several major AI companies now offer MCP integration, including Anthropic's Claude, Google's Gemini, OpenAI’s GPT series, and dedicated frameworks like Goose. This gives developers flexibility in choosing how to build their MCP applications.

We're actively researching additional frameworks to determine what works best for our clients. At SZNS, we believe MCP represents a significant opportunity to elevate our use of AI. If you are curious to learn more about MCP or have any questions, please reach out!