Continue is the open-source, BYO-model AI coding agent for VS Code, JetBrains, and CI pipelines. Apache 2.0 licensed, free for individuals, with paid Hub credits and team features.
Continue is an open-source AI coding platform that started as a Chat/Agent extension for VS Code and JetBrains and has expanded into a CLI plus source-controlled "AI checks" enforced on every pull request. We rate it 83/100 — the strongest open-source alternative to GitHub Copilot and Cursor for teams that want bring-your-own-model freedom and version-controlled AI behavior.
Continue was founded in 2023 by Nate Sesti and Ty Dunn and is built by Continue Dev, Inc. The core repo at continuedev/continue has crossed 32,800+ stars and 3,500+ forks. It is licensed Apache 2.0 and ships three things: an IDE extension for VS Code and JetBrains, an open-source CLI named cn, and the Continue Hub — a registry of shareable agents, models, rules, and prompts.
Where Cursor forks VS Code and Copilot ships as a closed extension, Continue stays vendor-neutral. You bring your own keys for OpenAI, Anthropic, Google, Mistral, Bedrock, Azure, xAI, or Ollama, and you pay those providers directly. The 2026 product pivot — "Continuous AI" — moves agents out of the editor and into your CI: each repo defines markdown agents in .continue/checks/ that run on every pull request as GitHub status checks, green or red with a suggested diff.
- AI checks: each repo defines markdown agents in .continue/checks/; the Continue CLI runs them as GitHub status checks on every pull request and posts review comments with suggested diffs.
- CLI (cn): run agents from your terminal, orchestrate multi-file edits, and integrate Continue into custom scripts and CI pipelines.
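To make the "AI checks" model concrete, here is a sketch of what one of those files could look like. The file name and its contents are illustrative assumptions, not Continue's documented schema — only the location (.continue/checks/) and the plain-markdown format come from the product description above:

```markdown
<!-- .continue/checks/no-hardcoded-secrets.md (hypothetical example) -->
Review the diff in this pull request.

Fail the check if any added line contains a hardcoded API key,
token, or password. For each finding, post a review comment with
a suggested diff that moves the value into an environment variable.
```

Because the check is just a markdown file in the repo, changes to it go through the same pull-request review as any other code.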
On Reddit's r/LocalLLaMA, Continue is the most-recommended option whenever someone asks for a Copilot replacement that works with Ollama or LM Studio — top threads call out the easy local model setup and the freedom to mix providers. The original Show HN thread from August 2023 hit 200+ points, with HN commenters praising the open-source license but flagging early-version rough edges.
Recurring criticism in 2026 falls into three buckets. First, autocomplete latency is still behind Cursor's Tab model when running smaller local LLMs — heavy users typically pair Continue with a hosted Claude or GPT key for inline completion. Second, the early-2026 pivot toward "AI checks in CI" confused some long-time IDE users; a popular Reddit thread argued the marketing now buries the IDE extension that brought most people in. Third, the JetBrains plugin lags the VS Code extension by a release or two, particularly for new agent features. The team has been responsive on these in GitHub Discussions.
The Continue extensions and CLI are free and open source. You only pay if you use Continue Hub's hosted credits or seat-based team features.
| Plan | Price | What's Included |
|---|---|---|
| Open-source | $0/month | VS Code & JetBrains extensions, the cn CLI, Apache 2.0 license. Bring your own API keys or run local models with Ollama. |
| Starter | $3 / million tokens, pay-as-you-go | Hub access, agent runtime, integrations (Slack, Sentry, Snyk, Linear), credits for frontier models. No monthly minimum. |
| Team | $20 / seat / month | Everything in Starter, plus $10 in model credits per seat, private agent sharing, agent allow-lists, and Google/GitHub SSO. |
| Company | Custom | SAML/OIDC SSO, bring-your-own-API-keys, contractual commitments, invoicing, SLA, dedicated support. |
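The bring-your-own-model freedom of the Open-source tier comes down to declaring providers in Continue's config file. Below is a minimal sketch mixing a hosted Anthropic key with a local Ollama model; the YAML field names reflect recent Continue versions but the schema has changed between releases, so treat every field here as an assumption and check the official docs:

```yaml
# ~/.continue/config.yaml — sketch only; field names may differ across versions
name: my-assistant
models:
  - name: Claude (hosted)
    provider: anthropic
    model: claude-3-5-sonnet-latest   # placeholder model id
    apiKey: <your-anthropic-key>
    roles:
      - chat
      - edit
  - name: Llama (local)
    provider: ollama
    model: llama3.1:8b                # any model pulled into Ollama
    roles:
      - autocomplete
```

You pay Anthropic directly for the hosted model; the Ollama model runs entirely on your machine, which is what makes the local-only setups discussed below possible.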
Best for: Engineers who already have an Anthropic, OpenAI, or Gemini key and want the best open-source AI coding UX without paying a second subscription; teams that want their AI behavior version-controlled in the repo; security-conscious shops that need local-only inference on Ollama.
Not ideal for: Solo developers who want zero-config "install and it just works" AI — Cursor or Copilot are simpler. Also not ideal for teams whose codebase is so large that @codebase retrieval struggles without a hosted vector index.
Pros:
- Apache 2.0 license and true bring-your-own-model freedom: any hosted provider key, or fully local inference with Ollama.
- AI behavior lives in source control under .continue/ — reviewable, diffable, and reproducible.

Cons:
- Autocomplete latency trails Cursor's Tab model when running smaller local LLMs.
- The early-2026 "Continuous AI" marketing buries the IDE extension that brought most users in.
- The JetBrains plugin lags the VS Code extension by a release or two.
Yes, especially if you already pay for an LLM API. Continue gives you a polished, open-source IDE experience plus a CI agent runner without locking you to one model vendor. The 83/100 reflects two real friction points — slower local autocomplete and a marketing message that pulled focus from the IDE — but neither outweighs the strategic value of an Apache-licensed, BYO-model platform you can extend, fork, or self-host. Install the VS Code extension, point it at your existing API key, and you will know within 30 minutes whether it replaces Copilot for you.
The Continue extensions and the cn CLI are all free and open source under Apache 2.0. You pay nothing if you bring your own API keys or run local models with Ollama. Continue Hub credits start at $3 per million tokens, and the Team plan is $20 per seat per month.