MCP support #58
-
I agree. Even more, most of the mentioned criticisms of MCP are invalid. MCP supports streamable HTTP as a transport mechanism, so it's not a "custom protocol", and it is secure out of the box (if using HTTPS). And it is REST. Apart from that, MCP is more than just tools. Although tools are the most important feature, MCP also supports prompts and resources.
-
Yeah, the signal loss for the OpenAPI servers (sampling, resources, progress) becomes a non-starter beyond the most trivial use cases where you are not starting with a REST API. That being said, complaining here is not a productive use of anyone's time. The Open WebUI core has a Tool API that would facilitate these features.
-
MCP still has issues but is evolving rapidly. It stinks that Open WebUI will not support it. MCPO is silly in my use cases: I'm going to wrap my APIs in MCP only to convert them back to OpenAPI again? Also, if my upstream uses OAuth (e.g. Jira), then MCPO can't even be used. This doesn't help us scale, and the resource and prompt features of MCP are very useful in a large environment. Can this be built in Open WebUI with tools and templates? Sure, but then we still need to maintain an MCP version for users using code tools or direct LLM access. UTCP is promising and gaining popularity, but it is still not at the level of MCP.
-
I can't help but feel this is outside the problem domain Open WebUI is operating within. I'd be 100% fine if MCPO were released as a standalone project/proposal that Open WebUI supported, but instead it has been mandated. Right now it's being presented as "we know best; here's a white paper's worth of arguments proving it", but in practice it's anything but clear or desirable. If the goal is to create a new unified and popular standard, the current approach is not how one gains critical support and adoption. At this point, VS Code has full support for MCP via the stdio, SSE, and streamable HTTP transports. It supports all features of FastMCP tooling, including elicitation, progress reporting, sampling, and more. As a previous power user of Open WebUI, I don't want to have to spin up a separate, largely manually configured, process just to use a tool that I already have proxied via a more complete client.
-
I'm going to contribute here by saying that a lot of enterprise tools opening up to AI use are now provided via MCP: take, for example, Atlassian's Remote MCP and GitHub's own tooling. I do love that OpenAPI in general is provided as a possibility for offering tools to the AI; it's a very nice way of describing an API, using JSON Schemas for clear definitions from a developer's/API user's perspective. I can see some of my own code, which already exposes APIs with OpenAPI specs, and more specific APIs integrating this way, pre-optimizing for AI. And maybe one day it will be well supported alongside MCP. But the reality right now is that the tools already optimized for AI follow the MCP standard. MCPO is currently not gaining significant traction. The proxying is cumbersome and adds another layer of headaches around things like authorization. The effect of mandating MCPO is that it becomes unnecessarily complicated to even get a jumpstart with tooling, and an artificial delay in adopting tooling for Open WebUI is set up that way. It nukes the point of running Open WebUI, which for me is making AI easier to use. I would like this to just work and not cause more trouble than it's worth. A strong opinion, however well reasoned, shouldn't make a product less usable when it's supposed to be part of an evolving ecosystem.
-
Why can't it just be a list of MCP servers, click a checkbox, and BAM... done. Anything that involves copying and pasting from anything to anything, in my view, isn't ready.
-
ChatGPT now natively supports MCP. I think if OpenAI supports it out of the box, so can Open WebUI.
-
**🚀 Open WebUI Needs Native MCP Support: The Industry Standard Is Moving Without Us**

**Summary**

Open WebUI currently supports MCP (Model Context Protocol) only through MCPO (an MCP-to-OpenAPI proxy), requiring users to run an additional proxy layer. Meanwhile, the AI industry has rapidly adopted native MCP as the universal standard for AI agent tooling. Major players, including OpenAI, Google DeepMind, and Microsoft, along with hundreds of development tools, have implemented direct MCP support. If Open WebUI doesn't implement native MCP support soon, we risk becoming irrelevant in the AI tooling ecosystem.

**The Industry Reality Check**

Major adoptions of native MCP (2024-2025):

**What This Means**

Sam Altman described the adoption of MCP as a step toward standardizing AI tool connectivity, and Demis Hassabis, CEO of Google DeepMind, described MCP as "rapidly becoming an open standard for the AI agentic era".

**Why MCPO Isn't Good Enough**

While MCPO works as a temporary bridge, it creates significant friction:

1. **Additional complexity**
2. **Developer experience friction**
3. **Ecosystem fragmentation.** Every major AI platform now speaks MCP natively. Open WebUI speaking only OpenAPI creates:
4. **Missing MCP-native features.** Native MCP provides capabilities that are lost in translation through MCPO:

**The Competitive Threat**

MCP is positioned to become a universal connector for AI systems, and big technology companies are releasing their own MCP servers every week. Claude Desktop with native MCP provides an incredibly smooth user experience. Users can:

Open WebUI's current approach makes it the ONLY major AI interface that requires a proxy layer for MCP. This positions us as the legacy platform that hasn't adapted to industry standards.

**What Native MCP Support Would Enable**

1. **Developer ecosystem alignment**
2. **Enterprise adoption**
3. **Future-proofing**

**Action Is Needed Now**

The window for adopting MCP is closing rapidly. Every month Open WebUI delays native MCP support:

**Community Support**

Looking at existing discussions, there's clear community demand:

Open WebUI should prioritize native MCP support as a strategic initiative for 2025. This isn't just a feature request; it's about remaining competitive and relevant in the rapidly evolving AI tooling landscape. The choice is clear: lead by implementing native MCP support, or follow by maintaining legacy proxy approaches while the industry moves forward without us.
-
Maintainer here. I appreciate the thoughtful perspectives in this thread, and I agree with most of the substance, especially the parts calling out real-world friction around MCPO vs native MCP. But a few narratives keep getting repeated that aren't quite right, and they make it harder to have the engineering conversation that actually moves us forward.

First, to reiterate what I wrote earlier in the other thread: we never "opposed" MCP in principle. Our hesitation was about churn and deploy-surface risk, not the idea of a common tool protocol. For a long stretch, MCP was (and to be honest still is) a moving target: multiple transport stories (stdio, SSE, "streamable HTTP"), evolving session semantics, and security guidance that changed materially across revisions. If you ship a single-user desktop app, you can hide a lot of that behind a controlled runtime. If you ship a multi-tenant web app that must run behind random corporate proxies, on Docker, on bare metal, over TLS termination stacks you don't control, with CORS/CSRF/WAF constraints, you don't get that luxury. Every transport transition means another round of audits: XSS/injection surfaces in tool payloads, per-user isolation, token lifetimes, replay and CSRF, backpressure, DoS throttling, event ordering guarantees across reconnects, etc. The cost is multiplicative in a web product because the browser and the network edge are part of your execution environment.

Streamable HTTP deserves a special call-out because it looks simple on a slide but gets gnarly in the wild. SSE through various ALBs, Nginx/Traefik, Cloudflare, corporate MITM proxies, and idle-timeout policies is fragile unless you layer heartbeats, jittered reconnects, buffering caps, and out-of-band acks (a sketch of the defensive client loop this implies is below). Now add stateful session lifecycles on top (init → capabilities → tool calls → progress → teardown) and you're effectively multiplexing logical connections over a lossy transport that lacks backpressure. Without sessions, "connection identity" collapses; with sessions, you're implementing a connection manager and a garbage collector. None of that is impossible; it's just engineering with real tradeoffs, and the blast radius is bigger for a web product than a local client.

JSON-RPC, which MCP uses, is great for stdio: single process boundary, deterministic ordering, no intermediaries. It definitely made sense when MCP first launched. Over HTTP in heterogeneous networks it sheds a lot of properties we rely on for observability and resilience: verb semantics (idempotency, cacheability), standardized error surfaces (415/429/503 with Retry-After), native gateway features (authN/Z policies, quotas, schema-based request validation), and commodity tooling (OpenAPI → codegen → typed clients → policy as code). That's a big reason you don't see JSON-RPC as the dominant web API shape in 2025. The moment you're threading through proxies, tracing with OpenTelemetry, or enforcing per-tenant rate limits in an API gateway, REST/OpenAPI simply fits the operational model better.

To clear up another misconception: MCPO is not a "spec" we invented to compete with MCP. It's a compatibility bridge, an adapter that lets us expose the OpenAPI surface we already support to MCP clients. The reason we lean into OpenAPI for internal/enterprise tooling is that it composes with everything else our users already run: org SSO, service accounts vs user delegation, policy engines, audit, pagination conventions, retries, and typed SDKs.
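To make the "heartbeats, jittered reconnects, buffering caps" point concrete, here is a minimal sketch of the kind of defensive SSE consumer loop I mean, assuming an httpx-based async client. The URL, timeout values, and `handle` function are illustrative placeholders, not Open WebUI internals:

```python
import asyncio
import random

import httpx

HEARTBEAT_TIMEOUT = 30.0  # seconds of silence before we assume the stream is dead
MAX_BACKOFF = 60.0


async def stream_events(url: str) -> None:
    """Consume an SSE stream with heartbeat detection and jittered reconnects."""
    backoff = 1.0
    while True:
        try:
            async with httpx.AsyncClient(timeout=None) as client:
                async with client.stream(
                    "GET", url, headers={"Accept": "text/event-stream"}
                ) as resp:
                    resp.raise_for_status()
                    lines = resp.aiter_lines()
                    while True:
                        # Proxies and load balancers often kill idle streams
                        # silently, so treat prolonged silence as a dead link.
                        line = await asyncio.wait_for(
                            lines.__anext__(), HEARTBEAT_TIMEOUT
                        )
                        if line.startswith("data:"):
                            handle(line[len("data:"):].strip())
                        backoff = 1.0  # healthy traffic resets the backoff window
        except (httpx.HTTPError, asyncio.TimeoutError, StopAsyncIteration):
            # Jittered exponential backoff so a fleet of clients doesn't
            # reconnect in lockstep after a gateway restart.
            await asyncio.sleep(backoff * random.uniform(0.5, 1.5))
            backoff = min(backoff * 2.0, MAX_BACKOFF)


def handle(payload: str) -> None:
    print("event:", payload)  # placeholder event handler


if __name__ == "__main__":
    asyncio.run(stream_events("https://example.com/events"))  # placeholder URL
```

And that is just the client half, before sessions, resumption, and per-user isolation enter the picture.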
Importantly, it lets us forward an Open WebUI session (or a short-lived OAuth 2.1 token with PKCE) to your tool servers without forcing every user to perform a separate OAuth dance per MCP server. Per-user OAuth can be useful; it can also be a UX foot-gun when you deploy ten internal tools and suddenly everyone is juggling ten separate consents, ten refresh-token lifecycles, and ten copies of the same identity. For many orgs, we believe the right answer is: SSO once at the UI, downstream mTLS or a service principal to the tool server, and fine-grained authorization enforced by the tool's own policy layer. OpenAPI fits that extremely well.

Additionally, the MCP ecosystem is still inconsistent and starting to become more fragmented. Some servers only speak stdio. Some expose SSE but not the newer streamable HTTP semantics. A lot of clients prioritize "tools" and punt on resources and prompts. That's not an indictment of MCP; it's a sign that the spec is maturing while vendors ship the 80% that matters to their users. But let's be honest about the DX here: implementing a robust MCP client in a web context means solving session routing, cross-tab coordination, credential scoping, connection pooling, and exponential backoff across many servers at once, while preserving per-user isolation. Building an MCP server that behaves well under load means persistent event queues, bounded streaming, cancellation, progress semantics that don't leak resources, and strict schema validation so tool misuse fails fast. The "look, one config file and it just works" demos are valuable, but they are not the full picture for multi-tenant deployments.

Now, where I disagree with some of the recent comments is the "everyone has standardized on MCP, therefore Open WebUI must immediately drop everything and follow" framing. Large vendors have strategic alignments, some tightly coupled to specific model providers, others to particular cloud stacks, and that naturally colors what they package and promote. Momentum is real; so are incentives. Yes, MCP has financial tailwinds and real adoption. That said, we also see a different signal among self-hosted and enterprise users who need durable, boring integrations that survive staff turnover and quarterly audits. Those teams are doubling down on OpenAPI because it is battle-tested, the toolchain is deep, and the handoff between platform, security, and app teams is far clearer.

In practice, the sustainable pattern we're seeing is: define a well-documented OpenAPI surface with rich JSON Schemas and examples, implement auth and policy cleanly, and then, if you need MCP, auto-wrap the same server. Libraries like fastapi_mcp make that straightforward (see the sketch below). You don't have to pick one religion.

MCP has good ideas we like a lot: capability discovery, typed tool schemas, progress and cancellation semantics, and an emerging common lexicon across tool ecosystems. Those are wins. But standardization in our space should not mean "pick one vendor's favored stack and ossify around it," especially while the spec is still evolving in observable ways. We're building Open WebUI to outlast this hype cycle. That means favoring protocols and operational models that are resilient under boring enterprise conditions, are easy to observe and secure, and won't make you rewrite everything the next time a transport or session primitive changes. OpenAPI meets that bar today; MCP is getting closer, and we'll support it carefully where it's stable and high value.
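For the "define OpenAPI first, wrap with MCP at the edge" pattern, the shape is roughly this, following fastapi_mcp's documented quickstart as I understand it; the Ticket endpoint and its schema are made-up placeholders:

```python
from fastapi import FastAPI
from fastapi_mcp import FastApiMCP
from pydantic import BaseModel

app = FastAPI(title="Ticket API")


class Ticket(BaseModel):
    id: int
    summary: str


@app.get("/tickets/{ticket_id}", response_model=Ticket, operation_id="get_ticket")
def get_ticket(ticket_id: int) -> Ticket:
    """One OpenAPI-described endpoint; its schema doubles as the MCP tool schema."""
    return Ticket(id=ticket_id, summary="example")  # placeholder data


# Wrap the existing app and mount an MCP endpoint alongside the REST surface,
# so MCP clients and plain HTTP clients hit the same backend and the same
# auth/policy layer. (mount() exposes /mcp in the versions I've looked at.)
mcp = FastApiMCP(app)
mcp.mount()

# Run with: uvicorn this_module:app --port 8000
```

One backend, two protocol faces, zero duplicated business logic: that is the economics we are optimizing for.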
If you're all-in on MCP, you can still get great mileage with Open WebUI: run your MCP servers, connect via streamable HTTP, and go (a minimal server example is at the end of this comment). If you're an org with existing OpenAPI surfaces, you can keep your current operational discipline and add MCP on the edges where it helps, without duplicating your backend. That's the middle path we're optimizing for, and it's how we avoid locking you into our preferences or anyone else's.

Our plan remains pragmatic. We already support the OpenAPI tool path with system-level auth integration and SSO token forwarding. We just added MCP tool support in our dev branch, and we're aligning on streamable HTTP with OAuth 2.1 now that it's congealing. We are open to mapping "resources" once the interop story stabilizes across clients and the security model is crisp for multi-tenant web. None of this is an ideological stance; it's a sequencing and risk posture so we can ship features that don't collapse under the long tail of deployments.

A quick note on community process: drive-by comments that restate the same talking points without engaging prior replies or reading linked threads create noise we have to triage instead of code we can ship. If you want to help us do this right, concrete technical asks are gold: which transport(s) you need, expected auth flows (user-delegated vs service principal), any constraints from your proxy/WAF, required semantics for cancellation/progress, and the minimum viable set of capabilities for your use case. That information lets us design to real constraints instead of chasing vibes.

We're not racing to be first; we're working to be correct, secure, and maintainable for the next decade(s). If the spec continues to stabilize, we'll keep deepening native support. In parallel, we'll keep investing in the OpenAPI path because, for a large class of internal deployments, it remains the most economical, observable, and durable choice. The web learned these lessons once already; there's a reason JSON-RPC never became the backbone of web service integration, even though it's elegant for a single process boundary. Over stdio it makes sense; over the public internet it drags hidden costs you end up paying in reliability and ops.

Thanks to everyone pushing on details and sharing real deployments. We might not do it fast, but we'll do it right, and we'll keep iterating until it's great.

Tim

MCP Support in our dev branch, testing wanted here!
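For anyone who wants to test the dev-branch path, a minimal MCP server over streamable HTTP looks roughly like this, assuming the official `mcp` Python SDK's FastMCP helper; the server name and tool are placeholders:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers (placeholder tool)."""
    return a + b


if __name__ == "__main__":
    # Serve MCP over the streamable HTTP transport; recent SDK versions
    # default to http://127.0.0.1:8000/mcp.
    mcp.run(transport="streamable-http")
```

Point the dev build at that endpoint and report what breaks; transport and auth edge cases are exactly the feedback we need.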
-
I completely understand the devs having strong opinions about what they want to support, but you can't swim against the current. You can write a thousand pages on why MCPO is superior, but the whole point of having a standard is not having to deal with all kinds of extra layers proxying things and adding complexity. Refusing to support MCP natively because they like REST APIs and want to push cloud support is, I can't help but think, short-sighted. Hopefully they change their mind soon. Just my 2 cents.