Open WebUI
Self-hosted AI chat in your browser — free, open-source, any LLM
Open WebUI is a self-hosted, extensible AI chat platform that runs fully offline with Ollama or any OpenAI-compatible API. With 133k GitHub stars and enterprise-grade RBAC, SSO and RAG built in, it's the default frontend for the local LLM era.
Open WebUI is an open-source, self-hosted AI chat platform that runs a private ChatGPT-style interface against Ollama, Claude, OpenAI and any OpenAI-compatible API — entirely on your own hardware, entirely offline if you want it. We rate it 86/100 — it is the closest thing the open-source world has to a default frontend for the local LLM era, provided you are comfortable running a Docker container and can live with a support model that mostly means opening GitHub issues.
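Worth spelling out: "OpenAI-compatible" means any backend that speaks the /v1/chat/completions wire format, which is what lets a single UI front so many providers. A quick way to sanity-check that a given server will slot in is to hit that endpoint directly. The URL and model name below are placeholders; the example assumes a local Ollama instance, which exposes the same API on its default port.

```bash
# Probe an OpenAI-compatible endpoint directly (sketch; URL and model
# are placeholders -- 11434 is Ollama's default port).
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Say hello in five words."}]
      }'
```

If that returns a chat completion, Open WebUI can be pointed at the same base URL as a provider.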
Open WebUI was created by Timothy Jaeryang Baek in early 2024 as "Ollama WebUI" — a side project to give the then-new Ollama runner a usable browser interface. It was renamed to Open WebUI later that year once the team added OpenAI-compatible API support, and has since grown into a full application with a team behind it, an enterprise tier and one of the most active communities in open-source AI.
As of this writing, the project sits at 133,000+ GitHub stars, ships releases roughly every two weeks, and the latest stable release is v0.9.1. The pitch is simple: if you have a GPU at home or a private server, you should not have to choose between ChatGPT Pro and a bare terminal — Open WebUI is the missing UI layer that turns whatever model you want to run into something your non-technical teammates will actually use.
Deployment is a single Docker image (ghcr.io/open-webui/open-webui:latest), with :ollama and :cuda variants that bundle the runtime. No telemetry, no outbound calls required — all data stays on your box.
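For reference, a minimal launch looks roughly like the sketch below. The port mapping, volume name and Ollama address are illustrative defaults, not requirements; check the project's install docs for the exact flags for your setup.

```bash
# Minimal self-hosted launch (sketch; tag and flags per your environment).
# Persists app data in a named volume and serves the UI on localhost:3000.
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:latest

# Pointing the container at an Ollama instance on the host
# (host.docker.internal works on Docker Desktop; use the host's IP on Linux):
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:latest
```

The :ollama variant bundles the runtime in the same container, so the second command collapses to a single image; the :cuda variant adds GPU support and expects --gpus all.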
On r/LocalLLaMA, Open WebUI is effectively the default recommendation for anyone asking "what should I put in front of Ollama?" — and a recent thread praising v0.9's RAG overhaul cleared 2,000 upvotes. Praise clusters around polish ("looks and feels like ChatGPT"), the breadth of providers, and the fact that SSO/RBAC are free. On Hacker News, a v0.6 launch thread drew comments like "Open WebUI is one of those projects where every release makes me slightly less excited about my ChatGPT subscription."
Complaints are real and recurring. The v0.6.6 license change — requiring branding to stay visible for deployments over 50 users unless you buy an enterprise license — triggered sharp debate on GitHub and pushed some users toward alternatives (notably LibreChat, whose growth has continued). Users also note that observability is thin compared to commercial platforms (basic logging, no per-user usage attribution), and that the upgrade path between major versions has shipped migration bugs more than once.
The core platform is free under the Open WebUI License (a permissive license with a branding-protection clause for large deployments). You pay only for the infrastructure you run it on, and — if you use cloud models — whatever the upstream API charges. Optional managed and enterprise tiers are available for teams that want white-labeling or SLA support.
| Plan | Price | Key Limits |
|---|---|---|
| Self-hosted (OSS) | $0 | Unlimited users and models. Must keep Open WebUI branding visible for deployments over 50 users. |
| Community hosted | from $9.99/month | Managed instance for solo users and small teams who don't want to run the container themselves. |
| Enterprise | Custom (contact sales) | White-labeling, LTS releases, SLA support, priority security patches, SCIM/OIDC hardening. Custom-quoted by seat count. |
Best for: Developers, homelabbers and small engineering teams who want a polished ChatGPT-style UI over local models or private API keys, and mid-sized enterprises that need SSO/RBAC over LLMs without paying per-seat commercial pricing. It is especially strong for regulated industries (healthcare, legal, government) that cannot send data to OpenAI but still want a modern chat experience over their own inference cluster.
Not ideal for: Companies that need granular per-user token attribution, chargeback billing, or complex audit logs out of the box — the enterprise story there is still catching up to commercial alternatives. Also not the best fit for users who refuse to touch Docker; while the installer is simple, it still assumes you can run a container.
Pros:
- Polished, ChatGPT-grade UI over local models or private API keys
- SSO, RBAC and RAG included at no cost
- Runs fully offline with no telemetry; all data stays on your hardware
- Broad provider support: Ollama, OpenAI, Claude and any OpenAI-compatible API
- Active community and a roughly biweekly release cadence

Cons:
- Branding must stay visible above 50 users unless you buy an enterprise license
- Thin observability: basic logging, no per-user token attribution or chargeback billing
- Major-version upgrades have shipped migration bugs more than once
- Support mostly means opening GitHub issues
- Assumes you are comfortable running a Docker container
LibreChat is the most popular true-OSS fork-adjacent alternative — fully MIT-licensed with no branding clause, though the UI is a step behind and RAG is less polished. AnythingLLM (Mintplex Labs) focuses more on document-centric RAG workflows and less on the general chat experience. Onyx (formerly Danswer) is the enterprise-search-first choice with better ACLs but a heavier footprint. For raw chat against Ollama, the upstream Ollama app itself is fine but lacks RAG, multi-user and tool calling.
Yes — with one caveat. If you are running LLMs on your own hardware or through API keys and you don't need per-user billing or advanced audit features, Open WebUI is the highest-leverage piece of software you can install in 2026. A single docker run gets you a chat UI better than most commercial products, RAG over your documents, voice calls, SSO, and the ability to swap models mid-conversation. The enterprise pricing opacity and the branding clause are real frictions for organisations above 50 users, which is the only reason we land at 86/100 rather than 90+. For everyone else, it is simply the default.