Strategic Implications of the Model Context Protocol (MCP)

The real ‘AI battle’ is happening on the client side – i.e., between those building AI assistants (MCP clients). So one must ask: what incentive do data-rich tech companies have to become MCP server providers for their data? If MCP continues to gain adoption, controlling the MCP client interface would confer significant power and revenue opportunities.

The Model Context Protocol (MCP) introduces a decoupled client-server model for AI integrations. In this architecture, MCP servers act as data/tool providers (exposing resources, databases, APIs, etc.), while MCP clients are AI-powered applications or assistants that consume those resources.
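
To make the server/client split concrete, here is a minimal sketch of an MCP server, assuming the official TypeScript SDK (@modelcontextprotocol/sdk); the "get_forecast" tool and its canned reply are invented for illustration:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// An MCP server exposes named tools/resources that any compliant client can discover.
const server = new McpServer({ name: "weather-demo", version: "0.1.0" });

// Hypothetical tool: a client's AI model can invoke this by name with typed arguments.
server.tool("get_forecast", { city: z.string() }, async ({ city }) => ({
  content: [{ type: "text", text: `Forecast for ${city}: sunny, 22°C` }],
}));

// stdio transport: the client launches this process and exchanges
// JSON-RPC messages with it over stdin/stdout.
await server.connect(new StdioServerTransport());
```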

This design benefits several groups:

  • AI Assistant Developers (Clients): Builders of AI tools (like Anthropic’s Claude, the Cursor AI code editor, and OpenAI’s agents) gain a universal way to plug their models into many data sources. Instead of writing one-off connectors for each service, they can integrate any MCP-compliant server. As Anthropic’s announcement explains, “instead of maintaining separate connectors for each data source, developers can now build against a standard protocol,” simplifying integration efforts (Introducing the Model Context Protocol - Anthropic). Early adopters of MCP include AI-driven platforms (Claude’s desktop app, Cursor IDE) and developer tools (Zed, Replit, Codeium, Sourcegraph), all of whom want their AI features to easily retrieve relevant information from various systems. This standardization means these AI client developers can deliver richer functionality faster, making their AI assistants more powerful and context-aware (see the client-side sketch after this list).

  • Tool and Data Source Providers (Servers): Organizations can expose their data or services through MCP servers to make them accessible to AI agents. This benefits companies that want their tools to be used widely in AI workflows without having to partner individually with each AI vendor. For example, Anthropic released reference MCP servers for Google Drive, Slack, GitHub, databases, etc., so that any AI assistant supporting MCP can interface with those systems. This modular approach (much like microservices in traditional web apps) lets different teams innovate in parallel: tool developers can build and update their MCP servers independently of the clients, making it easier to add new functionality without overhauling the entire system. In essence, MCP creates an ecosystem in which AI assistants become more useful, and data/tool providers gain broader reach.

  • End Users and Enterprises: Indirectly, users benefit because their chosen AI assistants can connect to a wide array of data sources seamlessly. Companies can leverage AI on their proprietary data by either running an MCP server internally or using third-party servers, all while adhering to a common standard. This promises fewer “fragmented integrations” and a more unified AI experience across tools and datasets. It is analogous to how the USB-C standard benefited consumers by making hardware interconnections universal. Likewise, MCP could reduce friction in hooking up AI to the exact data or services needed.
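
For the client side, here is a minimal sketch of an assistant consuming an arbitrary MCP server, again assuming the TypeScript SDK’s high-level client helpers; the server command and tool name are placeholders:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "demo-assistant", version: "0.1.0" });

// Any MCP-compliant server is consumed the same way; the command below is a
// placeholder for whatever server binary or script you actually run.
const transport = new StdioClientTransport({
  command: "node",
  args: ["./my-mcp-server.js"],
});
await client.connect(transport);

// Discover what the server offers, then invoke a tool by name.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "get_forecast",
  arguments: { city: "Berlin" },
});
console.log(result);
```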

In summary, those driving the development of AI clients (Anthropic, OpenAI, Cursor, etc.) stand to gain significantly from MCP as it broadens their models’ capabilities. At the same time, developers of specialized tools or data services also benefit if they adopt MCP, since their offerings can be easily plugged into many AI systems without custom adapters. The network effect grows as more clients and servers join the ecosystem, incentivizing both sides to participate.

MCP and REST APIs: Relationship and Extensions

MCP’s design builds on familiar web communication patterns, but extends them for continuous, two-way AI interactions. Under the hood, MCP can operate over HTTP-based channels. In fact, one of its supported transports uses HTTP for client-to-server calls (requests) combined with Server-Sent Events (SSE) for server-to-client streaming of responses. All communication uses JSON-RPC 2.0 message formats on top of these transports. This means that an MCP client and server exchange JSON payloads describing actions or data, much as a RESTful API would exchange JSON over HTTP – but with a crucial difference: MCP is bidirectional and session-oriented, rather than simple one-off request/response calls.
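
Concretely, here is what that JSON-RPC 2.0 framing looks like for MCP’s tools/call method (a sketch; the tool name and payload are illustrative):

```typescript
// A JSON-RPC 2.0 request, as an MCP client would POST it to a server.
// "tools/call" is a real MCP method; the tool name and arguments are invented.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "get_forecast", arguments: { city: "Berlin" } },
};

// The matching response, correlated by id. Over the SSE transport this arrives
// as an event on the stream rather than as the HTTP response body.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: { content: [{ type: "text", text: "Forecast for Berlin: sunny, 22°C" }] },
};
```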

In practical terms, MCP can be seen as “built on top of REST calls” in the sense that it leverages HTTP verbs (e.g. the client sends an HTTP POST) and streaming responses (SSE) to maintain a connection. However, it is more than a traditional REST API. A standard REST API call is typically stateless and singular (one request returns one response). MCP, by contrast, maintains context across a dialogue and allows a sequence of interactions to occur over one persistent session or stream. This enables the AI client to, for example, ask an MCP server for data, get a streamed answer or follow-up events, and even issue further tool commands, all within the same session. In essence, MCP extends web API principles to support continuous, stateful conversations between AI agents and tools.
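
To make the session-oriented model concrete, here is a rough outline of one MCP session, with method names taken from the MCP specification and payloads abbreviated:

```typescript
// Sketch of one persistent MCP session (method names from the MCP spec):
//
// 1. client → server  { method: "initialize", params: { protocolVersion, capabilities, clientInfo } }
// 2. client → server  { method: "notifications/initialized" }  // handshake done; session established
// 3. client → server  { method: "tools/list" }                 // discover available tools
// 4. client → server  { method: "tools/call", params: { name, arguments } }
// 5. server → client  streamed results, progress notifications, or even its own
//    requests (e.g. "sampling/createMessage", asking the client's model for a completion)
//
// Steps 3-5 can repeat any number of times within the same session; unlike a
// stateless REST call, the context established in step 1 persists throughout.
```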

Is MCP the “HTTP of AI”? MCP aims to be for AI-tool communication what HTTP is for web pages – a universal protocol. While the analogy isn’t perfect, it underscores MCP’s use of web infrastructure (HTTP, JSON) to create a new standard for AI integrations. The recently updated MCP specification even replaced the initial HTTP+SSE approach with a more robust “streamable HTTP transport”, enhancing real-time bidirectional data flow for compatibility with more systems. MCP relies on web standards at its core but goes beyond basic REST by introducing a continuous, bidirectional RPC layer. Developers familiar with building RESTful APIs will find MCP’s patterns (HTTP endpoints, JSON messages) recognizable, but they will also encounter additional features like persistent sessions, function/tool schemas, and streaming outputs that adapt the protocol to the needs of AI agent workflows.
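
As a rough sketch of the client’s side of that streamable HTTP transport (the server URL and the /mcp endpoint path are assumptions for illustration):

```typescript
// Client side of the streamable HTTP transport (sketch).
const res = await fetch("https://example.com/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // The client accepts both: the server may answer with a single JSON
    // body or upgrade the response to an SSE stream.
    Accept: "application/json, text/event-stream",
  },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
});

if (res.headers.get("content-type")?.includes("text/event-stream")) {
  // Streaming case: each SSE "data:" line carries one JSON-RPC message.
  const reader = res.body!.getReader();
  // ... read chunks from `reader` and parse the events as they arrive
} else {
  // Plain case: exactly one JSON-RPC message in the response body.
  console.log(await res.json());
}
```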

Will Data-Rich Companies Build MCP Servers for Their Data?

A key question for MCP’s success is whether major tech companies and enterprises with valuable proprietary data will embrace the protocol by creating MCP servers. So far, MCP’s momentum has been driven by AI-focused companies and developer tool providers, rather than the biggest data owners. Anthropic, Microsoft, and OpenAI have all thrown support behind MCP on the “client” side, ensuring their AI platforms can act as MCP clients (or hosts) that consume server-provided data. There is also early evidence of big tech involvement on the server side: for instance, Microsoft just released a Playwright MCP server that lets AI agents control a web browser via MCP. This shows at least one large player exposing a tool (web browsing capability) through the protocol. But what about companies like Amazon, Google, Apple, or large enterprises like banks?

If Meta, Amazon, or Apple sign on, MCP could soon become the universal ‘language’ of AI actions. Adoption by these data-rich firms is crucial for the protocol to reach its full potential. Each of these companies controls unique ecosystems and data (for example, Amazon with commerce and AWS cloud data, Apple with device and health data, Meta with social data). Their incentive to build MCP servers largely depends on strategic calculations:

  • Competitive Advantage vs. Openness: Building an MCP server to expose proprietary data—such as Amazon sharing shopping or AWS service data, or a bank exposing customer financial data through MCP—could enable powerful new AI-driven applications. However, doing so might empower third-party AI assistants, possibly created by competitors, to leverage that data. Companies will ask: “Do we gain more by opening our data via MCP, or by keeping it siloed for our own AI solutions?” If the “fight for AI” is viewed as a battle among AI service providers, these companies might hesitate to become mere data providers for someone else’s AI client. For example, Amazon may prefer using its own AI—via AWS Bedrock or Alexa—to interact with its data, rather than feeding proprietary Amazon data into Claude or ChatGPT. This reduces the immediate incentive to create open MCP endpoints for their most valuable proprietary datasets.

  • Enterprise Demand and Customer Pressure: On the other hand, large enterprises might adopt MCP if their customers and partners demand interoperability. For instance, an enterprise with a trove of domain-specific data (like financial records in a bank, or inventory data in a retailer) might build MCP servers for internal use or B2B partnerships. This would allow authorized AI clients to access their data in a controlled way. Notably, Anthropic has provided tools (Claude itself can even help write the code) to rapidly build custom MCP servers, hinting that they expect organizations to spin up MCP servers for their own data. If enough enterprises do this internally, companies like Amazon or Oracle might provide official MCP connectors to their databases or cloud services to stay relevant in that workflow. In fact, community-driven projects have already appeared, and the high level of community interest suggests that if the big vendors don’t act, others will rig up their own solutions. That prospect may actually encourage companies like Amazon to offer supported MCP integrations as a value-add for their cloud customers (to prevent unreliable third-party hacks, or to promote usage of their services with AI).

  • Security and Control: Any company exposing data via MCP will likely do so with strict authentication, authorization, and auditing. The MCP spec has introduced OAuth 2.1-based authorization to address these concerns. For a bank, the decision to build an MCP server would hinge on confidence in security and compliance. If those boxes are checked, a bank could deploy MCP servers to let its internal AI assistant securely query transaction databases or document repositories, improving employee efficiency. Such an MCP server might not be public but still demonstrates adoption of the standard in a private context. As MCP matures, we may see industry-specific MCP servers (finance, healthcare, etc.) guarded by permissions, enabling AI models to tap into sensitive data on a need-to-know basis.
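
As a sketch of what that authorization looks like on the wire, assuming a bearer token obtained via an OAuth 2.1 flow (the endpoint, the token helper, and the retry handling below are all illustrative):

```typescript
// Hypothetical helper: in a real client this would run the OAuth 2.1 flow
// (authorization code + PKCE) against the server's authorization endpoint.
async function getAccessToken(): Promise<string> {
  return "example-access-token"; // placeholder
}

// The enterprise endpoint below is invented; the pattern is the point: every
// MCP request carries a bearer token the server can validate and audit.
const res = await fetch("https://mcp.bank.example/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Accept: "application/json, text/event-stream",
    Authorization: `Bearer ${await getAccessToken()}`,
  },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
});

if (res.status === 401) {
  // Token missing or expired: re-run the authorization flow, then retry.
}
```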

In summary, data-rich organizations will join the MCP ecosystem only if they see a clear benefit – either in enhancing their own AI offerings or in enabling new use cases that involve their data (with proper safeguards). Right now, the momentum is coming from AI vendors and tool providers. Big tech companies have taken initial steps (OpenAI and Microsoft on the client side, Microsoft providing at least one server, Google reportedly “falling in line” with standardization efforts), but we have yet to see, for example, Amazon announcing “MCP endpoints” for many of its services. It’s possible that over the next year, as MCP gains traction, customer expectations could nudge more large players to provide MCP support. If Meta, Amazon, or Apple sign on with either their AI or their data, it would greatly accelerate MCP becoming a universal interface. Until then, the MCP landscape might be somewhat one-sided – favoring those who consume data (AI clients) more than those who hold it.

The AI Competitive Landscape: Clients vs. Servers

My gut tells me that “the fight for AI is the MCP client fight”: the primary competition in the AI industry will be among those who build the AI agents/clients, rather than those who provide data (servers). If MCP or similar standards succeed, access to tools and data becomes commoditized infrastructure. In such a scenario, many AI assistants could tap into the same pool of MCP servers (for email, for cloud storage, for databases, etc.), leveling the playing field on the data-access dimension. What, then, differentiates one AI assistant from another? Mostly the quality of the AI model, the user experience, and the client-side features.

Indeed, we are seeing companies like OpenAI and Anthropic race to integrate MCP to enhance their own AI platforms. As of March 2025, both Claude and ChatGPT (OpenAI) have added or are adding MCP support, which means each can interface with external data/tools similarly. When multiple AI systems all have access to, say, a Google Drive MCP server or a Salesforce MCP server, the competitive advantage shifts to how well each AI can make use of that data to help the user. This includes factors like the AI model’s reasoning ability, context length, fine-tuning to user needs, and how seamlessly the client application integrates into workflows (IDE plugins, desktop apps, etc.). In that sense, the “fight for AI” – i.e. who dominates the next era of AI assistants – will likely play out on the client side: companies will vie to have the go-to AI agent that users choose to interface with all their tools and data. Being the preferred “AI orchestrator” for users is analogous to being the preferred web browser or operating system in earlier tech eras.

However, it’s important to add nuance: the dynamics of client vs. server power can evolve. If certain data or tools are only available through one provider’s servers, that provider can exert influence. For example, if Amazon never exposes an MCP server for some unique dataset or service (or does so only for its own AI), then an AI client that can’t access that data is at a disadvantage for tasks requiring it. In that case, the fight might also involve securing exclusive data partnerships or building proprietary connectors. We might see a mixed landscape where core services become standardized (many common tools have MCP servers accessible to all clients), but some premium data sources remain siloed as competitive moats. For now, though, the trend with MCP’s growth and multi-company support suggests a movement towards openness. In a protocol-driven era, the value tends to shift to who implements the best client of that protocol, rather than who controls the protocol itself (since it’s open).

In conclusion, if MCP continues to gain adoption, it will reduce friction for AI systems to access a wide range of tools and data. This leveling effect means AI providers will compete on overall capability and user trust more than on exclusive integrations. The AI assistant that provides the most helpful, reliable, and secure experience will win users, assuming it can connect to all the data the user needs. Data-rich companies may initially be hesitant to feed this ecosystem, but if MCP becomes ubiquitous (and especially if they themselves start using it internally), the pressure will mount for them to participate or risk their data silos being bypassed in favor of more open alternatives. Ultimately, the landscape could resemble the web: many data providers (servers) accessible via a few dominant “browsers” or agents (AI clients). In such a landscape, controlling the client interface confers significant power and revenue opportunities, explaining why Anthropic, OpenAI, Microsoft, and others are aggressively positioning their AI agents, and why the MCP client side is a hotly contested front.

Published Mar 28, 2025

I am a software developer and more recently a generative AI consultant. I am passionate about connecting applications to generative AI.