Mastra is an open-source TypeScript AI agent framework that bundles agents, workflows, RAG, memory, evals, and observability into a single coherent stack — built by the team behind Gatsby. We rate it 86/100 — the most polished and ergonomic option for JavaScript and TypeScript teams shipping LLM agents to production.
Mastra is a Node.js framework for building AI agents and agentic workflows in TypeScript. It was created by Sam Bhagwat, Smith Maru, and Abhi Aiyer, the same team that founded Gatsby, and incubated at Y Combinator. The project went public in October 2024, and the team shipped the long-awaited v1.0 release in January 2026. By that point Mastra had crossed 22,000 GitHub stars and roughly 300,000 weekly npm downloads.
The pitch is simple: most agent frameworks (LangChain, LlamaIndex, CrewAI) were born in Python and ported reluctantly to JavaScript. Mastra was designed TypeScript-first from day one, so types flow end to end and the API feels native to the Node.js, Next.js, and Vercel-style ecosystem rather than a translation layer.
Workflows are composed with .then(), .branch(), and .parallel() for deterministic orchestration when you do not want the model deciding control flow. Getting started is one command: npm create mastra@latest scaffolds a project, and the dev server exposes a local UI at http://localhost:4111 where you can chat with agents, replay traces, and inspect workflow state interactively.
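To make the chaining style concrete, here is a minimal, self-contained sketch of the .then()/.parallel() pattern. This is a toy Workflow class for illustration only, not Mastra's actual API; see the official docs for the real createWorkflow/createStep signatures.

```typescript
// Toy illustration of chained, deterministic orchestration:
// each step is a plain async function, and control flow is fixed
// by the chain, never decided by a model.
type Step<I, O> = (input: I) => Promise<O>;

class Workflow<I, O> {
  constructor(private run: Step<I, O>) {}

  // Chain a step: the previous step's output feeds the next one.
  then<N>(step: Step<O, N>): Workflow<I, N> {
    return new Workflow(async (input) => step(await this.run(input)));
  }

  // Fan out: run several steps concurrently on the same input.
  parallel<N>(steps: Step<O, N>[]): Workflow<I, N[]> {
    return new Workflow(async (input) => {
      const out = await this.run(input);
      return Promise.all(steps.map((s) => s(out)));
    });
  }

  execute(input: I): Promise<O> {
    return this.run(input);
  }
}

// Usage: normalize a query, then branch into two parallel analyses.
const wf = new Workflow<string, string>(async (q) => q.trim())
  .then(async (q) => q.toLowerCase())
  .parallel([async (q) => `summary:${q}`, async (q) => `keywords:${q}`]);

wf.execute("  Hello World  ").then((r) => console.log(r));
// logs: [ 'summary:hello world', 'keywords:hello world' ]
```

The appeal of this shape is that every step is individually typed, so the compiler catches a mismatched input/output pair at the chain site rather than at runtime.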
On Hacker News and r/LocalLLaMA, the recurring praise is the developer experience: end-to-end TypeScript types, fast cold starts, and a studio that actually surfaces what the agent did. Independent benchmarks circulating in early 2026 reported P95 latency of ~1,240 ms versus ~2,450 ms for LangChain, 18 hours versus 41 hours of build time for a comparable RAG agent, and lower error rates (5.8% vs 8.9%). Those numbers are worth treating as directional rather than gospel, but they match the ergonomic story users tell.
The honest complaints are also consistent. Several breaking changes between v0.3 and v0.4 (notably the workflow API rewrite) burned early adopters, and integration coverage is much narrower than LangChain's hundreds of connectors — Mastra ships closer to 50–60. Python-shop teams also note that the TypeScript-only stance creates a real coordination cost when ML engineers think in notebooks.
The framework itself is fully open source under Apache 2.0. Mastra Cloud is the optional managed deployment layer, and the public pricing page lists three tiers:
| Plan | Price | Key Limits |
|---|---|---|
| Starter | $0 | Unlimited users and deployments, 100,000 observability events, 24 hours of CPU uptime, 10 GB egress |
| Teams | $250 per team / month | Multiple teams, custom SSO, SOC 2 documentation, 250 hours of CPU time, 100 GB egress |
| Enterprise | Custom | RBAC, support SLA, dedicated support engineer, custom CPU and egress |
Add-ons are billed à la carte: $10 per 1M model tokens, $10 per 100,000 observability events, $100 per additional project, and $10 per GB of extra egress.
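Under those rates, a monthly add-on bill is simple arithmetic. The helper below is hypothetical (not part of any Mastra SDK) and just combines the four listed line items:

```typescript
// Illustrative add-on cost calculator using the published à-la-carte rates.
function addOnCost(opts: {
  modelTokensMillions?: number; // $10 per 1M model tokens
  observabilityEvents?: number; // $10 per 100,000 events
  extraProjects?: number;       // $100 per additional project
  extraEgressGb?: number;       // $10 per GB of extra egress
}): number {
  const {
    modelTokensMillions = 0,
    observabilityEvents = 0,
    extraProjects = 0,
    extraEgressGb = 0,
  } = opts;
  return (
    10 * modelTokensMillions +
    10 * (observabilityEvents / 100_000) +
    100 * extraProjects +
    10 * extraEgressGb
  );
}

// e.g. 5M tokens ($50) + 300k events ($30) + 1 project ($100) + 20 GB ($200):
console.log(
  addOnCost({
    modelTokensMillions: 5,
    observabilityEvents: 300_000,
    extraProjects: 1,
    extraEgressGb: 20,
  })
); // logs: 380
```

The practical takeaway is that token and egress overages dominate quickly, so the $250 Teams tier's included 250 CPU hours and 100 GB egress matter more than the sticker price.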
Best for: TypeScript and JavaScript teams shipping production LLM features inside Next.js, Node.js, or React stacks; founders building agentic SaaS who want types, evals, and tracing without gluing five libraries together; engineers who want an MCP server with one decorator instead of a sidecar.
Not ideal for: Python-first AI teams with existing LangChain, LlamaIndex, or DSPy infrastructure; projects that depend on the long tail of LangChain integrations Mastra has not yet wrapped; teams that cannot tolerate any further API churn before v1.x stabilizes.
Pros:
- End-to-end TypeScript types, fast cold starts, and an API that feels native to the Node.js ecosystem.
- Agents, workflows, RAG, memory, evals, and tracing in one stack instead of five glued-together libraries.
- The local studio at localhost:4111 gives you a real playground and trace viewer without paying for SaaS.

Cons:
- Breaking changes between v0.3 and v0.4 (notably the workflow API rewrite) burned early adopters.
- Integration coverage (roughly 50–60 connectors) is much narrower than LangChain's hundreds.
- The TypeScript-only stance creates real coordination costs for Python-first ML teams.
Langfuse covers tracing and evals but does not give you agents or workflows. LangChain.js still has the broadest integration catalog, at the cost of a heavier and more Python-shaped API. Vercel's AI SDK is lighter and great for chat UIs, but stops short of Mastra's workflow engine, memory, and MCP authoring story.
If you are writing TypeScript and you want to ship an LLM agent or agentic workflow to production this quarter, Mastra is the most coherent option on the market in 2026. The framework rewards teams who pick one stack and commit to it; it is less suited to polyglot AI shops that already lean on Python tooling. We rate it 86/100 — held back from the 90s only by the still-young integration catalog and the API churn that preceded v1.0, both of which look likely to improve through 2026.
One licensing caveat: the ee/ directory uses a separate Mastra Enterprise License for production use.