LM Studio
Run open-weight LLMs on your own hardware, with a polished desktop GUI and an OpenAI-compatible local API
LM Studio is the most polished desktop GUI for running open-source LLMs locally. Free for personal and commercial use as of mid-2025, it now powers everything from private chat to a full OpenAI-compatible local server.
LM Studio is the closed-source but generously free desktop app that turns any modern Mac, Windows or Linux laptop into a private OpenAI server. Browse Hugging Face from a clean GUI, download a quantized model, chat in a familiar UI, and expose an OpenAI-compatible API on localhost — that's the whole pitch. We rate it 87/100: for users who want a polished way to run gpt-oss, Llama, Gemma, Qwen and DeepSeek on their own hardware, it is the most refined option in 2026.
LM Studio is a desktop application built by a small team led by founder Yagil Burowski. The app has shipped more than thirty 0.3.x updates through 2025, with the latest release, 0.3.39, adding Open Responses support for local models. Earlier 0.3 milestones added Model Context Protocol (MCP) host support, built-in retrieval-augmented generation (RAG), structured outputs and a CLI for streaming server logs.
Instead of asking you to learn a CLI, LM Studio gives you a Hugging Face browser, one-click GGUF or MLX downloads, sensible defaults for GPU offload and quantization, and a familiar chat window. On Apple Silicon it ships an optimized MLX runtime; on x86 it leans on llama.cpp. The most strategically important update came in July 2025, when LM Studio dropped the commercial-license requirement and became free for use at work — no form, no contact, no sales call.
The built-in local server exposes /v1/chat/completions, /v1/embeddings and the new /v1/responses on localhost:1234, so existing OpenAI SDK code points at LM Studio with a one-line base URL change. The lms CLI streams server logs, loads and unloads models, powers the structured outputs API for JSON-typed responses, and supports fully headless workflows.
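Here is a minimal sketch of that base-URL swap, assuming the local server is running on the default port 1234 and a chat model is already loaded; the model name below is illustrative, not something LM Studio ships by default.

```python
# Point the official OpenAI Python SDK at LM Studio's local server.
# Assumptions: server running at localhost:1234 and a chat model loaded;
# "qwen2.5-7b-instruct" is an illustrative name, substitute whatever you have loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # the one-line change
    api_key="lm-studio",  # the local server ignores the key, but the SDK requires a non-empty string
)

response = client.chat.completions.create(
    model="qwen2.5-7b-instruct",
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
)
print(response.choices[0].message.content)
```

Because only the base URL differs, switching between the local endpoint and a hosted OpenAI model is a one-line diff, which makes side-by-side evaluation of local versus cloud models cheap.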
Sentiment is broadly positive, with one persistent caveat. On r/LocalLLaMA, the most upvoted comparisons call LM Studio "the most polished local LLM interface" and report having it "running in under five minutes"; the GUI is a clear differentiator versus Ollama for non-developers. XDA Developers' hands-on review concluded that LM Studio "proved" that research-grade local LLMs are now viable.
The recurring complaint is that LM Studio is not open source. r/LocalLLaMA and Hacker News threads regularly note distrust around closed-source releases and fear of future paywalls. A second cluster of complaints concerns hardware: 7B models are comfortable on 16 GB of RAM, but anything past 13B unquantized starts to swap painfully. Multiple testers also note that local 8B models still trail GPT-5.2 or Claude on complex reasoning.
Pricing is the simplest part of the story: the app is free, the work license is free, and there is no metering. Revenue comes from the optional Enterprise tier.
| Plan | Price | Key Limits |
|---|---|---|
| Personal | $0 | All features, unlimited local usage, public Hub |
| Work | $0 | Same as Personal — no commercial license required since July 2025 |
| Enterprise | Contact | SSO, MCP/model gating, private team Hub, support SLAs |
Best for: developers, researchers and curious power users who want a friction-free way to evaluate open-weight models, point existing OpenAI SDK code at a private endpoint, or run RAG over sensitive documents that cannot leave the laptop. It is also the right pick for non-developers who want ChatGPT-style chat without the cloud bill.
Not ideal for: teams that require a fully open-source stack for compliance — Ollama or llama.cpp are the honest answers there. It's also a poor fit for budget hardware: anything under 16 GB RAM with no dedicated GPU will struggle past tiny 1B–3B models.
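To make the sensitive-document use case above concrete, here is a minimal retrieval sketch against the local /v1/embeddings endpoint. It is only a sketch under assumptions: the server is running on the default port, an embedding model is loaded, and the model name below is illustrative. Nothing in it leaves the machine.

```python
# Tiny local retrieval example over LM Studio's /v1/embeddings endpoint.
# Assumptions: server at localhost:1234 with an embedding model loaded;
# "nomic-embed-text-v1.5" is an illustrative name, use whichever model you downloaded.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

docs = [
    "Quarterly revenue grew 14% on the back of enterprise renewals.",
    "Expense reports must be filed within 30 days of travel.",
]

def embed(texts):
    resp = client.embeddings.create(model="nomic-embed-text-v1.5", input=texts)
    return [item.embedding for item in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

doc_vectors = embed(docs)
query_vector = embed(["When are expense reports due?"])[0]
best = max(range(len(docs)), key=lambda i: cosine(doc_vectors[i], query_vector))
print("Most relevant passage:", docs[best])
```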
Pros:
- Polished, beginner-friendly GUI with one-click GGUF and MLX downloads from Hugging Face
- Free for personal and commercial use, with no metering
- OpenAI-compatible local server plus the lms CLI, MCP host support, built-in RAG and structured outputs
- Optimized MLX runtime on Apple Silicon; llama.cpp on x86

Cons:
- Closed source, which fuels distrust and fear of future paywalls
- Needs capable hardware: under 16 GB of RAM with no dedicated GPU limits you to small models
- Local 8B-class models still trail frontier cloud models on complex reasoning
Ollama remains the developer-first, fully open-source alternative — better for scripted workflows, weaker as a GUI. Jan is the closest open-source clone of LM Studio's UX with a smaller model catalog. Open WebUI is especially strong as a multi-user front end on top of Ollama.
For a free desktop app, LM Studio's combination of polish, model catalog and developer features is hard to beat in 2026. The 87/100 reflects an app that does almost everything we want a local-LLM runtime to do — except be open source. If that single caveat is a deal-breaker, run Ollama or Jan; otherwise LM Studio is the most pleasant way to put a private GPT-class assistant on your own laptop.