The 2026 Solo Builder AI Stack: Picking Tools for Optionality, Not Features

The solo builder AI stack conversation has a problem. Everyone posts what they use. Nobody posts why they chose it, or what they'd replace it with if they had to.
A stack built for features is a dependency trap. A stack built for optionality is a business asset. The difference matters when a tool you depend on changes its pricing, kills its API, or gets acquired. If you picked Claude because it was the best model last month, you have a preference. If you picked Claude because your orchestration layer treats it as a swappable slot, you have a strategy.
Here's how I think about the stack for a solo operation in 2026, and what I'd change tomorrow if any single layer disappeared.
The Principle: Every Layer Should Be Replaceable
The worst thing a solo builder can do is build load-bearing walls around a single vendor. You don't have a team to absorb a migration. You don't have months of runway to rewrite integrations. When a tool breaks or reprices, you need to swap it in days, not quarters.
This means every layer of your stack should meet three criteria:
Standard interfaces. The tool communicates through protocols that other tools also speak. REST APIs, MCP, standard auth flows. If the only way to use it is through a proprietary SDK with no alternatives, it's a lock-in risk.
Data portability. You can export your data in a format another tool can import. If your notes live in proprietary blocks that only render in one app, you don't own your notes. If your automations are visual flows that can't be described as code, you can't version or migrate them.
Skill transferability. The time you invest learning the tool teaches you patterns that apply elsewhere. Learning n8n teaches you workflow orchestration concepts that transfer to any automation platform. Learning a tool's proprietary drag-and-drop interface teaches you that tool and nothing else.
The Stack (What I Actually Use)
Layer 1: Thinking (Claude / GPT / Gemini)
This is the LLM layer. I use Claude as my primary for code generation, analysis, and writing. GPT for specific tasks where it outperforms. Gemini for image generation and multimodal work.
The key: none of these are hardcoded. My orchestration layer (more on that below) calls them through a standard chat completions interface. Switching from Claude to GPT for a specific task is a config change, not a rewrite. When Gemini shipped better image generation, I routed image tasks there without touching anything else.
If Claude disappeared tomorrow: I'd route everything through GPT within an hour. The quality would shift on some tasks. The system would keep running.
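A minimal sketch of what a "swappable slot" can look like in practice. The task names, endpoints, and model IDs below are illustrative placeholders, not my actual config; the point is that rerouting a task is a dict edit, not a rewrite.

```python
# Hypothetical routing table: each task maps to a provider slot.
# Endpoints and model names are made-up placeholders.
MODEL_ROUTES = {
    "code":  {"base_url": "https://api.anthropic.example/v1", "model": "claude-primary"},
    "chat":  {"base_url": "https://api.openai.example/v1",    "model": "gpt-primary"},
    "image": {"base_url": "https://api.gemini.example/v1",    "model": "gemini-image"},
}

def route(task: str) -> dict:
    """Return the provider config for a task."""
    return MODEL_ROUTES[task]

def reroute(task: str, base_url: str, model: str) -> None:
    """The 'Claude disappeared tomorrow' move: one config change."""
    MODEL_ROUTES[task] = {"base_url": base_url, "model": model}
```

Any client that speaks a chat-completions-style interface can then be constructed from `route(task)`, which is what keeps the vendor out of the call sites.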
Layer 2: Building (Claude Code / Cursor)
This is where code gets written. Claude Code for vibe coding sessions where I describe what I want and iterate. Cursor for when I need to work inside existing codebases with full context.
Both tools operate on standard files in standard repos. The code they produce is just code. If Cursor shut down, I'd switch to Claude Code for everything (or whatever ships next). The output is portable because it's source code, not a proprietary artifact.
The trap to avoid: building inside a platform that generates code you can't run elsewhere. If your app only exists inside the tool that built it, you've traded optionality for convenience.
Layer 3: Orchestration (n8n)
This is the glue. n8n connects everything: APIs, databases, AI models, webhooks, scheduled jobs. It sits in the middle tier between no-code (Zapier, Make) and custom frameworks (LangChain, CrewAI).
I'll be honest: I resisted this layer for a long time. My instinct is to write scripts that hit APIs directly. If it doesn't have a REST endpoint, I move on. Visual workflow builders always felt like training wheels for people who can't write code.
n8n changed my mind because it's not replacing API calls. It's managing them. Every node in an n8n workflow is just an API call with error handling, retry logic, and conditional branching that I'd otherwise be writing and maintaining myself. The visual graph isn't a simplification. It's a monitoring surface for the same API-first architecture I'd build anyway, except I can see the state of every connection without tailing logs.
Why n8n over Zapier: n8n is source-available, self-hostable, and exposes workflows as JSON that you can version control. Zapier workflows live on Zapier's servers. If Zapier changes pricing (they have, repeatedly), your automations are hostage. With n8n, I export the JSON, spin up a new instance, import, done.
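Because the export is plain JSON, reviewing a workflow before committing it can be a few lines of script. The workflow below is a toy stand-in, not a real export, though real n8n exports carry a similar top-level shape (name, nodes, connections).

```python
import json

# Toy stand-in for an exported n8n workflow file.
exported = '''{
  "name": "daily-digest",
  "nodes": [
    {"name": "Cron", "type": "n8n-nodes-base.cron"},
    {"name": "Fetch", "type": "n8n-nodes-base.httpRequest"}
  ],
  "connections": {}
}'''

workflow = json.loads(exported)

def summarize(wf: dict) -> list[str]:
    """List node names and types -- useful when reviewing a git diff."""
    return [f'{n["name"]} ({n["type"]})' for n in wf["nodes"]]
```

This is the property Zapier doesn't give you: the automation exists as a file you can diff, review, and re-import somewhere else.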
Why n8n over raw scripts: I still write scripts for one-off tasks. But for anything with 3+ API integrations that needs to run reliably on a schedule, the maintenance cost of a custom script (error handling, retries, logging, monitoring) exceeds the learning cost of n8n. I learned this the hard way by maintaining a pipeline that called four APIs in sequence, and every time one of them changed response formats, I spent a Saturday debugging instead of building.
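The maintenance cost I'm describing is mostly boilerplate like the sketch below: a generic retry-with-exponential-backoff wrapper, which is roughly what an orchestration node gives you for free on every connection. This is an illustrative sketch, not n8n's implementation.

```python
import time

def call_with_retry(fn, attempts=4, base_delay=0.5):
    """Retry a flaky API call with exponential backoff.

    This is the code every hand-rolled integration ends up carrying,
    multiplied by the number of APIs in the pipeline.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the real error
            time.sleep(base_delay * 2 ** attempt)
```

Write this once and it's fine. Write four slightly different versions of it across four integrations and you get the Saturday-debugging problem.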
If n8n disappeared tomorrow: I'd move to Make for simple flows and fall back to custom scripts for complex ones. The workflow logic is documented in exportable JSON. The migration would take a week, not a month.
Layer 4: Backend (Supabase)
Database, authentication, storage, and API. Supabase is Postgres underneath, which means the data layer is standard SQL. If Supabase disappeared, the migration path is: export the Postgres dump, import into any managed Postgres host (Neon, Railway, raw RDS), update connection strings.
The trap to avoid: Firebase. Firebase stores data in a proprietary document format that doesn't map cleanly to anything else. Migrating off Firebase is a rewrite. Migrating off Supabase is a config change.
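"Update connection strings" can literally mean one environment variable if the connection is built in a single place. The hosts below are made up; the pattern is what matters.

```python
import os

def database_url() -> str:
    """Single source of truth for the Postgres connection.

    Moving from Supabase to Neon, Railway, or RDS means changing
    DB_HOST (and credentials) in the environment -- application
    code stays untouched because it's all standard Postgres.
    """
    host = os.environ.get("DB_HOST", "db.example.supabase.co")  # placeholder
    user = os.environ.get("DB_USER", "postgres")
    name = os.environ.get("DB_NAME", "postgres")
    return f"postgresql://{user}@{host}:5432/{name}"
```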
Layer 5: Deployment (Vercel / Cloudflare)
Static sites and serverless functions. Both deploy from Git. Both use standard web technologies. Switching between them is a matter of updating build commands and DNS records.
For this blog, I use Cloudflare Pages. For the content studio, Vercel. The choice is driven by specific features (Cloudflare's edge caching, Vercel's Next.js optimization), not by lock-in. Both ship standard HTML/CSS/JS from a standard Git repo.
What's Intentionally Not in the Stack
No-code app builders (Bubble, Glide, Retool). The apps they produce can't run outside their platform. You're renting your product. For prototyping, fine. For a business asset, no.
Proprietary AI wrappers with no export. Tools that wrap an LLM behind their own interface without letting you access the underlying API are taking a margin for convenience. That's fine if you're aware of it. It's a trap if you're not.
Monolithic platforms that do everything. The platform that handles your database AND your auth AND your hosting AND your AI AND your deployment is optimizing for their retention, not your flexibility. When one layer underperforms (and one always will), you're stuck choosing between a bad tool and a full migration.
The Optionality Test
Before adding any tool to your stack, ask three questions:
- What happens if this tool doubles its price next month? If the answer is "I'd pay it because migration is too expensive," you have a dependency, not a tool.
- Can I export everything I've built and run it somewhere else? If the answer requires more than a day of work, the tool owns more of your business than you do.
- Does learning this tool teach me something transferable? If the skills only apply inside this specific product, you're investing in their moat, not yours.
A stack that passes all three is a stack that compounds your optionality over time instead of eroding it.
The Actual Cost
Running this stack as a solo builder:
- Claude / GPT / Gemini: $20-60/month depending on usage tiers
- Cursor: $20/month (or free tier for light use)
- n8n: Free (self-hosted) or $20/month (cloud)
- Supabase: Free tier covers most solo projects. $25/month when you scale.
- Vercel / Cloudflare: Free tier for most use cases. $20/month if you need more.
Total: $60-145/month for a full AI-augmented development and automation stack.
For comparison: a single freelance developer costs $50-150/hour. A no-code platform like Bubble runs $29-119/month but your app can't run anywhere else. A bootcamp costs $10K-20K and teaches you skills that may be outdated by the time you graduate. This stack costs less than one hour of consulting billing per month, the skills transfer across every tool in it, and everything you build is portable.
The leverage is absurd if you use it to ship products instead of just experimenting.
What I'd Change
If I were starting from zero today, two things would be different.
First, I'd skip Zapier entirely and start with n8n. I picked Zapier because it was easier to set up. Two months later I'd outgrown it: too many zaps, pricing tiers climbing, and no way to version control my workflows. The migration to n8n took a full week of rebuilding flows I'd already built once. That week was the tax for choosing the path of least resistance.
Second, I'd standardize my API integration patterns before building anything else. My early projects had four different ways of handling auth tokens, three different retry strategies, and zero consistent error handling. Every new integration was a custom snowflake. The orchestration layer eventually forced consistency, but I would have saved months of debugging if I'd established the patterns upfront.
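One way to establish that consistency upfront: a single base client that owns the auth header and URL construction, so every integration inherits the same behavior instead of becoming a snowflake. The class and names here are illustrative, not code from my projects.

```python
class ApiClient:
    """Shared base for all integrations: one auth pattern, one URL scheme."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def headers(self) -> dict:
        # Every integration sends auth the same way -- no per-service variants.
        return {
            "Authorization": f"Bearer {self.token}",
            "Accept": "application/json",
        }

    def url(self, path: str) -> str:
        # Normalizes slashes so call sites can't produce malformed URLs.
        return f"{self.base_url}/{path.lstrip('/')}"
```

Each concrete integration subclasses or wraps this, and the retry and error-handling strategy lives in one place instead of four.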
The orchestration layer is the skeleton of a solo operation. Everything else hangs off it. Get that right first, even if it's harder to learn, and the rest of the stack becomes genuinely swappable.