
It feels far too early for a protocol that's barely a year old with so much turbulence to be donated into its own foundation under the LF.

A lot of people don't realize this, but the foundations that roll up to the LF have revenue pipelines supported by those foundations' events (KubeCon brings in a LOT of money for the CNCF), courses, certifications, etc. And, by proxy, the projects support those revenue streams for the foundations they're in. The flywheel is _supposed_ to work like this: companies donate to the foundation, those companies support the projects with engineering resources, they get a booth at the event for marketing, and the LF ensures the health and well-being of the ecosystem and foundation through technical oversight committees, elections, a service desk, owning the domains, etc.

I don't see how MCP supports that revenue stream, nor does it seem like a good idea at this stage: why get a "Certified MCP Developer" certification when the protocol is evolving so quickly and we've yet to figure out how OAuth is going to work in a sane manner?

A mature project like Kubernetes becoming the backbone of a foundation, as it did with the CNCF, makes a lot of sense: it was a relatively proven technology at Google with a lot of practical use cases for the emerging world of "cloud" and containers. MCP, at least for me, has not yet proven its robustness as a mature and stable project: I'd put it into the "sandbox" category of projects that are still rapidly evolving and proving their value. I would have much preferred for Anthropic and a small strike team of engaged developers to move fast and fix a lot of the issues in the protocol vs. it getting donated and slowing to a crawl.

At the same time, the protocol's adoption has been 10x faster than Kubernetes', so if you go by that metric, it actually makes sense to donate it now and let other actors in. For instance, without this, Google will never fully commit to MCP.

comparing kubernetes to what amounts to a subdirectory of shell scripts and their man pages is... brave?

Shell scripts written by nearly every product company out there.

There are lots of small and niche projects under the Linux Foundation. What matters for MCP right now is vendor neutrality.


Are you saying nearly every product company uses MCP? What a stretch

Welcome to the era of complex relationships with the truth. People comparing MCP to k8s is only the beginning.

Truth Has Died

Lemme ask an AI to double check that vibe.

I'd say this thread is both comparing and contrasting them...

Quaint. People 1%, AI 99%.

I meant to say every enterprise product

It doesn't matter, because only a minority of product companies worldwide (enterprise or not) use MCP. I'd bet only a minority use LLMs at all.

Oh so is that "truth" or "vibes" as the sibling comments are laughing about?

No, it's just another statement with no sources, just like yours :)

For what it's worth, I don't write MCP servers that are shell scripts. Mine are HTTP servers that load data from a database. It's nothing more exciting than a REST API with an MCP front end thrown on top.
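Roughly, a minimal sketch of that shape, assuming the official MCP Python SDK's FastMCP helper (the server name, table, and query here are made up):

    import sqlite3

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("orders")

    @mcp.tool()
    def lookup_order(order_id: str) -> str:
        """Return the status of an order by its ID."""
        conn = sqlite3.connect("orders.db")
        row = conn.execute(
            "SELECT status FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        conn.close()
        return row[0] if row else "order not found"

    if __name__ == "__main__":
        # streamable-http serves the tools over HTTP instead of stdio
        mcp.run(transport="streamable-http")

The MCP layer is just the tool schema and transport; the actual work is the same query a REST handler would run.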

Many people only use local MCP resources, which is fine... it provides access to your specific environment.

For me however, it's been great to be able to have a remote MCP HTTP server that responds to requests from more than just me. Or to make the entire chat server (with pre-configured remote MCP servers) accessible to a wider (company internal) audience.


Honest question: Claude can understand and call REST APIs given their docs, so what's the added value? Why should anyone wrap a REST API in another layer? What does it unlock?

I have a service that other users access through a web interface. It uses an on-premises open model (gpt-oss-120b) for the LLM and a dozen MCP tools to access a private database. The service is accessible from a web browser, but this isn’t something where the users need the ability to access the MCP tools or model directly. I have a pretty custom system prompt and MCP tools definitions that guide their interactions. Think of a helpdesk chatbot with access to a backend database. This isn’t something that would be accessed with a desktop LLM client like Claude. The only standards I can really count on are MCP and the OpenAI-compatible chat completions.

I personally don’t think of MCP servers as having more utility than local services that individuals use with a local Claude/ChatGPT/etc client. If you are only using local resources, then MCP is just extra overhead. If your LLM can call a REST service directly, it’s extra overhead.

Where I really see the benefit is when building hosted services or agents that users access remotely. Think more remote servers than local clients. Or something a company might use for a production service. For this use-case, MCP servers are great. I like having some set protocol that I can know my LLMs will be able to call correctly. I’m not able to monitor every chat (nor would I want to) to help users troubleshoot when the model didn’t call the external tool directly. I’m not a big fan of the protocol itself, but it’s nice to have some kind of standard.

The short answer: not everyone is using Claude locally. There are different requirements for hosted services.

(Note: I don’t have anything against Claude, but my $WORK only has agreements with Google and OpenAI for remote access to LLMs. $WORK also hosts a number of open models for strictly on-prem work. That’s what guided my choices…)
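To make the "standards I can count on" point concrete, here's a rough sketch of the client side (the base URL, model name, and tool schema are hypothetical; any OpenAI-compatible chat completions server should accept this shape):

    from openai import OpenAI

    # on-prem OpenAI-compatible endpoint; the key is unused but required
    client = OpenAI(base_url="http://llm.internal:8000/v1", api_key="unused")

    resp = client.chat.completions.create(
        model="gpt-oss-120b",
        messages=[
            {"role": "system", "content": "You are a helpdesk assistant."},
            {"role": "user", "content": "What's the status of ticket 4512?"},
        ],
        tools=[{
            "type": "function",
            "function": {
                "name": "lookup_ticket",
                "description": "Fetch a ticket's status from the backend DB.",
                "parameters": {
                    "type": "object",
                    "properties": {"ticket_id": {"type": "string"}},
                    "required": ["ticket_id"],
                },
            },
        }],
    )

    # If the model chose the tool, the call shows up here instead of
    # text; the host runs it and feeds the result back in a follow-up.
    msg = resp.choices[0].message
    print(msg.tool_calls or msg.content)

The tool schemas themselves come from whatever MCP servers the host has configured; the end user never sees any of this plumbing.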


Gatekeeping (in a good way) and security. I use Claude Code in the way you described but I also understand why you wouldn’t want Claude to have this level of access in production.

Ironically, models are sometimes more adept at calling REST or web APIs in general, because those are a huge part of their training data.

Also IIRC, K8s was perhaps less than 2 years old before it was accepted into the CNCF.

K8S was the original reason the CNCF was created.

So what if G doesn't commit? If MCP is so good, it can stand without them.

I don't see a future in MCP; this is grandstanding at its finest.

This is a land grab and not much else.

Hasn't MCP been public longer than Kubernetes had been when it was donated to the CNCF?


