> But even after a few hours of reading about what MCP is and working through an example, it can be confusing to follow exactly what is happening when and where. What does the LLM do? What does the MCP server do? What does the MCP client do? Where does data flow, and where are choices made?
Yeah, MCP is the worst-documented technology I have ever encountered. I understand APIs for calling LLMs, I understand tool-calling APIs. Yet I have read so much about MCP and have zero fucking clue, except vague marketing speak, or code that has zero explanation. What an amateur effort.
I've given up; I don't care about MCP. I'll use tool-calling APIs as I currently do.
I find the opposite after reading the spec. Did you read the spec? I mean the actual spec, not Python API documentation and such. :)
It’s just JSON-RPC between a client and one or more servers. The AI agent interaction is not part of what the protocol is designed for, except for re-prompting requests made by tools. It has to be AI-agnostic.
For the tool-call workflow: (a) the client requests the list of tools from the known servers, then forwards those (possibly after translating them to API calls, like the OpenAI tool-call API) to any AI agents it wants; (b) when the AI wants to call a tool, it returns a request that needs to be forwarded to the MCP server for handling; and (c) the result is returned back to the AI.
The spec is actually so simple that no SDK is even necessary; you could just write a script in anything with an HTTP client library.
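For example, here's a minimal sketch in Python of that whole workflow over plain HTTP. The endpoint URL and the "add" tool are made up, and a real server may also want the MCP initialize handshake and session headers first, but the wire format really is just JSON-RPC 2.0:

    import requests  # any HTTP client library works; the wire format is plain JSON-RPC 2.0

    URL = "http://localhost:8080/mcp"  # hypothetical server endpoint

    def rpc(method, params, req_id=1):
        body = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
        return requests.post(URL, json=body).json()

    # (a) discover the tools: names, descriptions, and JSON Schemas for the arguments
    tools = rpc("tools/list", {})
    print(tools["result"]["tools"])

    # (b)+(c) when the AI asks for a tool, forward the call and return the result to it
    result = rpc("tools/call", {"name": "add", "arguments": {"a": 10, "b": 50}}, req_id=2)
    print(result["result"]["content"])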
Oh, there's a spec! Something concrete, with definitions?! I'm starting to read now, and for the first time I understand something concrete, even if it's still somewhat verbose.
I've spent so much time clicking through pages and reading and not understanding, but without finding the spec. Thanks so much!
Yeah, it is not a well-thought-out spec. There is big confusion about what an MCP Client is versus an MCP Host, which is a useless separation: what the spec calls a client is just a connection to a server, while the MCP Host is the real client (the apps using MCP, like Claude Desktop, CLI tools, etc.).
But the host application does much more than just connect to MCP servers, as the host holds one-to-many client connections. The host application also has OTHER client connections, to AI agents and so forth.
I think it can be confusing in general; it's like understanding X11, where the client-server relationship is conceptually flipped. :)
This blog post is miles better than the MCP spec, which, yes, describes what you should do but doesn't really explain what it offers beyond JSON-RPC + auth. I think that's the point, though. It is really just an RPC layer for LLMs, and by keeping it "generic", an LLM can do anything with it.
I have trouble understanding the level of criticism about MCPs. As I understand it, it's just a tool that allows an LLM to communicate with other tools.
People often talk about web APIs, but we should also consider the integration of local tools. For me, the integration is mind-blowing.
When I tried the Playwright MCP integration [0][1] a few months ago, I really felt that after giving computers the ability to speak or communicate, we had now given them arms.
I still get goosebumps thinking about it.
[0] https://youtu.be/3NWy_sxD3Vc
[1] https://github.com/microsoft/playwright-mcp
Same here. Built a very rough Cucumber spec+Playwright test script generator on top of Playwright MCP and a Claude project.
Pasting in a product owner's AC and watching it browse through our test env for a few minutes before spitting out a passing - and passable - spec+test was kind of mind-blowing.
One confusing thing to me was the word "server". An "MCP server" is a server to the LLM "client". But the MCP server itself is a client to the thing it's connecting the LLM to. So it's more like an adapter or proxy. Also I was confused because often this server runs on your local system (although it doesn't have to). In my mind I thought if they're calling it a server it must be run in the cloud somewhere but that's often not the case.
MCP is supposed to support both the concept of a local and of a remote server, but in practice most have opted to build local servers, and the tooling basically only supports that, which is a shame and, in my opinion, a nonsensical choice that basically only has downsides (you need to maintain the local server, your customers need to install it, you have to remain retro-compatible with your local server, etc.).
This just continues to reinforce my feeling that everything around vibe coding and GenAI-first work is extremely shortsighted and poor quality.
Not more than what local servers do. You don't seem to understand what MCP is. Regardless of whether the MCP "server" is local or remote, it is JUST a wrapper around APIs. It's basically a translation layer to make your APIs adhere to the MCP spec, that's it.
Whether that wrapper's code runs on your laptop or a remote server changes nothing in terms of data exfiltration capabilities. If anything, it would make it more secure to have a remote server since at least you'd have full control over the code that's calling your API.
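To make that concrete, here is roughly what such a wrapper looks like with the official Python SDK's FastMCP helper. The weather endpoint is made up; the point is that the "server" is nothing but a thin translation layer over an existing API:

    import requests
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("weather")  # the whole "MCP server" is this wrapper process

    @mcp.tool()
    def get_forecast(city: str) -> str:
        """Get the current forecast for a city."""
        # hypothetical upstream API; the tool just translates and relays
        resp = requests.get("https://api.example.com/forecast", params={"city": city})
        return resp.text  # whatever we return is what ends up in the model's context

    if __name__ == "__main__":
        mcp.run()  # stdio by default; the same wrapper could sit behind a remote transport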
Right, but at least in the case of a local instance, the risk profile is shifted to the use of the computer. A less-than-ideal situation for sure, but on the other hand, a user should be able to do just about anything they want with hardware they own.
I'm talking about MCP servers that call 3rd party APIs, like your local MCP server calling the Jira instance of your company, the Google Maps API, etc.
Obviously local MCP servers make sense to interact with applications that you have installed locally, but that's by far not their only use.
It's a half-baked, rushed out, speculative attempt to capture developer mindshare and establish an ecosystem/moat early in a (perceived) market. It's a desperate "standard" muscled in by Amazon/Claude, similar to their overwrought "Smithy" IDL that basically nobody outside the Amazon SDK team chooses to use for API/Schema management. It will end up in that same niche in the long term, most likely... AWS/Amazon/Claude specific app integrations, buried underneath some other 3rd party framework that abstracts it away and makes the "spec" irrelevant.
MCP and Smithy aren’t comparable. Smithy is an internal tool used by almost every single team at Amazon (far, far more widely than just the SDK teams) to define APIs and generate API servers/clients. It was released publicly because “why not?”, but I assure you that Amazon doesn’t care if you use it or not.
As long as MCP "just works" (and it does) and is relatively simple, being first, rather than being best, is what made it successful.
It's gone so viral that it's practically entrenched already, permanently. Everyone has invested too much time saying how much they love MCP. If we do find something cleaner, it will still be called MCP, and it will be considered a 'variation' on MCP (a new streaming approach, maybe) rather than a competitor protocol replacing it. Maybe it will be called 'MCP 2.0', but it will be mostly the same and retain the MCP name for decades to come, I think.
Anyone who has worked with LLMs on non-trivial tasks knows how poorly they handle JSON vs. other formats (they do notably well with XML for some reason, but even YAML seems to be handled fine).
MCP forcing JSON for tool specifications seems like a massive mistake.
Maybe Google can save us with something built on top of protobufs.
The entire MCP mess is not even necessary with protobufs. Just give the LLM a gRPC server endpoint. Done.
No need to invent a protocol for listing the tools or their schemas. Just ask the gRPC server for the supported methods and look at the protobuf schema. This is mostly solved and supported out of the box. One potential improvement would be to have the server reply with the original protobuf source, including comments, for even better semantic understanding.
No need for the absolute disaster of multiple HTTP requests + SSE, or servers that need state to deal with session IDs and all the problems that causes. It's just a gRPC channel and streaming methods.
And auth? Just shove credentials into the metadata. We can standardize that format, or have the server reply with what it supports.
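For what it's worth, the discovery half of that is genuinely off the shelf already. A rough Python sketch against a reflection-enabled gRPC server (the address is assumed):

    import grpc
    from grpc_reflection.v1alpha import reflection_pb2, reflection_pb2_grpc

    channel = grpc.insecure_channel("localhost:50051")  # assumed reflection-enabled server
    stub = reflection_pb2_grpc.ServerReflectionStub(channel)

    # one streaming call enumerates every exposed service; no bespoke tools/list needed
    request = reflection_pb2.ServerReflectionRequest(list_services="")
    for response in stub.ServerReflectionInfo(iter([request])):
        for service in response.list_services_response.service:
            print(service.name)  # method and message schemas can be fetched the same way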
Sigh... I feel like ten years ago garbage like this would have been ignored or replaced with something actually sensible. But now nobody cares or feels the pain of the bad spec; they just vibe-code some more mess on top of it and keep growing the ~ecosystem~ swampland.
MCP is practically useful, but the total lack of security in its "design" for me just underlines the type of YOLO-driven development and lack of quality that's being marketed as productivity improvement in software engineering too often these days.
If you look at the stdio-based local-tooling problem for code assistants as the primary goal, I'm not sure if it's YOLO or if they just don't care / feel the need to address the security problems before the world rushes to build public servers.
MCP Clients need to support auth (and the spec probably needs a broader set of options for auth); this is going to be a major blocker for adoption.
The lack of some form of session setup process in the core protocol (not the current 'session' setup that negotiates the protocol) is certainly a PITA. I've been working on using MCP in a multi-tenant setup and it basically means I can't use any MCP Server as delivered at this point. Conceptually MCP is great. In certain single-user scenarios it is great. I think it'll eventually be great for me once the use case of "multi-tenant gateway service" becomes feasible.
Most providers don't support auth in their client implementations yet. Means it's only good for calling into public data. Private enterprise data is where there's huge value.
Not to complain but this "introduction" would've been better if it was just a simple tool to add numbers to make an LLM able to solve "What is 10 + 50?" using a remote tool. By solving a complex problem you've just added unnecessary complexity. Everyone would've already known how to extend a function call to solve some other set of problems. Sure it made the intro more "impressive" as an actual accomplishment, but seems like counterproductive impressiveness bordering on just showing off. lol. Nice work tho. I was impressed.
That's the missing piece in most of these descriptions.
You send off a description of the tools, the model decides if it wants to use one, then you run it with the args, send it back to the context and loop.
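In code, that loop is roughly this. An OpenAI-style sketch with a made-up add_numbers tool (echoing the add-two-numbers intro another commenter asked for); with MCP, the tools/list output is what would populate the tools array:

    import json
    from openai import OpenAI

    client = OpenAI()
    tools = [{"type": "function", "function": {
        "name": "add_numbers", "description": "Add two numbers.",
        "parameters": {"type": "object",
                       "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
                       "required": ["a", "b"]}}}]

    messages = [{"role": "user", "content": "What is 10 + 50?"}]
    while True:
        reply = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        msg = reply.choices[0].message
        if not msg.tool_calls:       # the model answered in plain text; we're done
            print(msg.content)
            break
        messages.append(msg)         # keep the model's tool request in the transcript
        for call in msg.tool_calls:  # run each requested tool with the model's args...
            args = json.loads(call.function.arguments)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": str(args["a"] + args["b"])})
        # ...and loop: the tool output is now context for the next model call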
I found that the other day and finally got what MCP is. Kinda just a convenience layer for hooking up an API via good "old" tool use.
Unless I'm missing something major, it's just marginally more convenient than hooking up tool calls for, say, OpenAPI. The power is probably in the hype around it more than in its technical merits.
I had a fun one yesterday. The `mcp-atlassian` server failed trying to create multiple Jira tickets. The error response (and error logs) was just a series of newlines (one for each ticket we wanted to create). Turned out the issue was the LLM decided to mis-capitalize the project code. My best guess is it read the product name, which has the same letters but not fully uppercase, and used that instead of the Jira project code which was also provided in the context.
The ideal is that you can simply connect to whatever MCP Server endpoint you need, without needing to code your own tools.
The reality is that the space is still really young and people are figuring things out as they go.
The number of people that have no real clue what they are doing that are jumping in is shocking. Relatedly, the number of people that can't see the value in a protocol specifically designed to work with LLM Tool Calling is equally shocking. Can you write code that glues an OpenAPI Server to an LLM-based Tool Calling Agent? 100%! Will that setup flood the context window of the LLM? Almost certainly. You need to write code to distill those OpenAPI responses down to some context the LLM can work with, respecting the limited space for context. Great, now you've written a wrapper on that OpenAPI server that does exactly that. And you've written, in essence, a basic MCP Server.
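For illustration, that distilling step is usually as mundane as this (the endpoint and field names are hypothetical):

    import json
    import requests

    def get_issue_summary(issue_id: str) -> str:
        # a raw issue payload can run to many kilobytes of fields the model never needs
        raw = requests.get(f"https://api.example.com/issues/{issue_id}").json()
        # keep only what the LLM should see, so the context window isn't flooded
        slim = {key: raw.get(key) for key in ("id", "title", "status", "assignee")}
        return json.dumps(slim)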
Now, if someone were to write an MCP Server that used an LLM (via the LLM Client 'sampling' feature) to consume an OpenAPI Server Spec and convert it into MCP Tools dynamically, THAT would be cool. Basically a dynamic self-coding MCP Server.
It's a vibe-coded protocol that lets LLMs query external tools.
You write a wrapper ("MCP server") over your docs/apis/databases/sites/scripts that exposes certain commands ("tools"), and you can instruct models to query your wrapper with these commands ("calling/invoking tools") and expect responses in a certain format that they can then use.
That is it.
Why vibe-coded? Because instead of bi-directional WebSockets, the protocol uses unidirectional server-sent events, so you need to send requests to a separate endpoint and then listen to the SSE hoping for an answer. There's also non-existent authentication.
You are complaining about the transport aspect of the specification.
The protocol could easily be transported over websockets. Heck, since stdio is one transport, you could simply pipe that over websockets. Of course, that leaves a massive gap around authn and authz.
The Streamable HTTP transport includes an authentication workflow using OAuth. Of course, that only addresses part of the issue.
There are many flaws that need improvement in MCP, but railing against the current transports by using a presumably denigratory term ("vibe-coded") isn't helpful.
Your "that is it" stops at talking about one single aspect of the protocol. On the server side you left out resources and prompts. On the client side you left out sampling, which I find to be a very interesting possibility.
I think MCP has many warts that need addressing. I also think it's a good start on a way to standardize connections between tools and agents.
The choice of transport is just one, quite telling, aspect of this mess.
Could these commands be executed over websockets? Yes, they could. Will they? No, because the specification literally only defines two transports, and all of the clients only support those.
As with any hype, the authors drink their own Kool-Aid, invent their own terminology, and ignore literally everything that came before them. Even reading through the explanations on the (once again vibe-coded) https://modelcontextprotocol.io/ you can't help but wonder why.
"tools" are nothing but RPC calls (that's why the base of this is JSON RPC)
"resources"? PHP could do an fopen on remote URLs in the 90s. It literally is just that: "Each resource is identified by a unique URI and can contain either text or binary data." You don't say.
"sampling"? It literally is just bi-directional communication. "servers request data from the client by sending commands". What a novel idea, must have a new name and marketing blurb about "powerful MCP feature, enabling sophisticated agentic behaviors while maintaining security and privacy."
As for auth, again, MCP doesn't have it, and expects you to just figure it out yourself. The entirety of the "spec" on it is just "MCP provides an Authorization framework for use with HTTP and you're expected to conform to this spec". There's no spec. Edit: to be clear, at the time of writing all mentions of "MCP Auth Spec" on the internet link to https://modelcontextprotocol.io/specification/2025-03-26 which at the time of writing contains zero mentions of OAuth and says nothing about auth (and is not a spec to begin with) [1]
And so on.
It's hype-driven vibe-coded development at its finest.
[1] The auth spec is here: https://modelcontextprotocol.io/specification/2025-03-26/bas... I don't think anything on the site links to this directly; I found the link in some GitHub discussion. See issues with it here: https://blog.christianposta.com/the-updated-mcp-oauth-spec-i...
Functions that an LLM can use in its reasoning are called "tools", so the prior description is probably more correct, in the sense that an API can be used to provide the LLM with tools.
My eye twitches every time I see something like "a lot of MCPs are". It's probably a lost cause at this point, but it's an MCP Server, not an MCP. And the other side of that connection would be an MCP Client that lives in an MCP Host which almost certainly could simply be called an Agent.
Before the whole "just use OpenAPI" crowd arrives, the point is that LLMs work better with curated context. An OpenAPI server not designed for that will quickly flood an LLM context window.
It's so apt that one of the most common questions/statements I hear is: why not use OpenAPI? I don't know the answer. Or: WTF is streamable HTTP? It sure feels like we're trying to reinvent WebSockets. It must be either #notinventedhere, or that while the genius devs build the LLMs, the interns do the documentation and SDKs.
"“MCP is an open protocol that standardizes how applications provide context to LLMs, what’s the problem?”"
We are already off to a wrong start: context has a meaning specific to LLMs, and everyone who works with LLMs knows what it means. The context is the text that is fed as input at runtime to the LLM, including the current message (user prompt) as well as the previous messages and responses by the LLM.
So we don't need to read any further, and we can ignore this article, and MCPs by extension. YAGNI.
This is a really shallow dismissal, and I say that as someone who is outspokenly critical of MCP [0].
As you yourself say, the context is the text that is fed as input at runtime to an LLM. This text could just always come from the user as a prompt, but that's a pretty lousy interface to try to cram everything that you might want the model to know about, and it puts the onus entirely on the user to figure out what might be relevant context. The premise of the Model Context Protocol (MCP) is overall sound: how do we give the "Model" access to load arbitrary details into "Context" from many different sources?
This is a real problem worth solving and it has everything to do with the technical meaning of the word "context" in this context. I'm not sure why you dismiss it so abruptly.
[0] https://news.ycombinator.com/item?id=43949503
But that's not what MCP does. It is a tool created by Anthropic (the 2nd-most-used LLM) to provide portability and vendor neutrality between different LLMs. It's like Terraform for LLMs.
Also, providing data through function calls/tool use is not context; you are overloading the term. Context is LLM context; if you fetch from a DB, it's something else.
> But that's not what MCP does. It is a tool created by Anthropic (the 2nd-most-used LLM) to provide portability and vendor neutrality between different LLMs. It's like Terraform for LLMs.
Given that your only contributions in this thread are to acknowledge your ignorance of MCP [0] and to post the summary dismissal upthread that shows your ignorance of it, it would probably behoove you to actually learn about MCP more before confidently making assertions about it. Suffice it to say that this is inaccurate and others have already explained in your "WTF is it" thread what MCP actually is.
> Also, providing data through function calls/tool use is not context; you are overloading the term. Context is LLM context; if you fetch from a DB, it's something else.
If you believe this then you don't understand how tool use is implemented. It's literally accomplished by injecting a tool's response into the context [1].
As a general life tip: most pedants are wrong most of the time. If you find yourself being pedantic, take a few steps back and double check that you're not just wrong.
[0] https://news.ycombinator.com/item?id=44011320
[1] https://platform.openai.com/docs/guides/function-calling
> If you believe this then you don't understand how tool use is implemented. It's literally accomplished by injecting a tool's response into the context [1].
I was doing tool use before ChatGPT released an official API for function calls. You literally give ChatGPT API specs and ask it to generate call parameters.
The API is fed into the LLM as context, the response is part of the output. Whether you pass that output through another layer of LLM is trivial. And even if you do, the "context" in that case would be only the response, not the whole database. You are confusing even yourself: you accepted the overloading of the word 'context' (pushed by a company for commercial purposes) and are now unable to distinguish between LLM context in terms of tokens, an external data source, and a response fetched by the tool.
It's not that I am ignorant of what Anthropic claims Context means; I'm contesting it. If Microsoft releases a new product and claims that Intelligence is the parameters of their Microsoft Product, then it pays to be a bit cynical instead of parroting whatever they say like some unpaid adman.
I couldn't care less about Anthropic or MCP—as I noted, I'm a critic of MCP—but pedants bug me quite a bit especially when they're wrong.
> The API is fed into the LLM as context, the response is part of the output.
So you implement tool use by feeding an API into the LLM as context in order to get it to produce call parameters. Got it.
> Whether you pass that output through another layer of LLM is trivial. And even if you do, the "context" in that case would be only the response
So the output of the tool when called with those parameters can be fed back into the LLM as further context. Got it.
Given the above, it seems that we agree that tool use is implemented entirely by giving selected bits of context to the model.
With that in mind, if one were to design a protocol that makes tool use plug-and-play instead of something that has to be coded by hand for each tool—a protocol designed to allow a model to discover tool APIs that it might want to bring into context and then use those APIs to bring their outputs into context—it would be reasonable to call said protocol the Model Context Protocol, because it's all about getting specific bits of Context into a Model.
I'm not sure why the word "context" is the hill you decided to die on here when there is so much else to pick on with MCP, but it's time to get off the hill.
That something can be context if you feed it as input to the LLM, and that its output will become input, is true for everything in an LLM. So you are not really conveying any meaning with that definition of MCP. MCP is an API layer between LLMs and SaaS applications, designed to provide vendor neutrality for the LLMs. It has nothing to do with the context window, which is a specific variable measured in kTokens.
It pays to be precise when speaking and studying, and it pays to develop precise language for nascent technologies when we communicate about them.
This reminds me of when I was studying chemistry and thought they were pedantic about the way they used the word salt. Or when I studied chess and called every bishop-and-knight attack on the f6 pawn the fried liver, instead of the specific sequence of moves we call the Fried Liver. Or when I thought the arm/forearm distinction in medicine was pedantic.
Science demands precision in communication. Feel free to steal a well-defined term and use it to mean something else that already has a different sign to denote it, but I'm not playing.
Try it and you'll figure it out.
And people are skipping over service discovery, i.e., making the AI know which steps/operations are good.
Same. Seeing apps reverse-engineered by LLMs with Ghidra [0] blew me away. It CTFed hard-coded access tokens and keys out of .so files in seconds.
[0] https://github.com/LaurieWired/GhidraMCP
Will it be supplanted? Perhaps. But it's not going to die a natural death.
Claude, for example, uses XML to generate MCP tool usage. At least top-level strings don't need to be JSON-encoded.
Most of the material on MCP is either too specific or too in-depth.
WTF is it?! (Other than a dependency by Anthropic)
Conversely, it allows many different LLMs to get context via many different applications using a standard protocol.
It addresses an m×n problem.
But that doesn't necessarily have to be negative.
Awful case of "not invented here" syndrome.
I'm personally interested in whether WebTransport could be the basis for something better.
I like this succinct explanation.
https://en.wikipedia.org/wiki/List_of_Tron_characters#Master...
But to save you the click & read: it's OpenAPI for LLMs
Agent LLMs are able to retrieve additional context and MCP servers give them specific, targeted tools to do so.