Digestify

A 3-step AI coding workflow for solo founders

Structured Vibe Coding

Educational summary of “A 3-step AI coding workflow for solo founders,” hosted on YouTube. All rights belong to the original creator. Contact me with any copyright concerns.

YouTube URL: https://www.youtube.com/watch?v=fD4ktSkNCw4

Host(s): Claire (How I AI)

Guest(s): Ryan Carson

Podcast Overview and Key Segments

Overall Summary

Claire hosts Ryan Carson, a five-time founder, to unpack a simple, rigorous, three-step AI coding workflow inside Cursor. Ryan shows how to turn “vibe coding” into a repeatable system: write a PRD, generate a task list, and execute tasks one by one with rules and human checkpoints. The episode dives into context control, prompt design, MCP integrations, and model choice. Ryan demos PRD and task list prompts, task execution rules, and a headless browser MCP for front-end testing. He also shares Repo Prompt for exact context packing. The core message: slow down, set context, give AI a clear plan, and keep a human in the loop. With this, a solo founder can ship serious features fast, cut toil, and stay in control.

Reference

  • PRD: Product Requirements Document. A clear spec for a feature. Includes goals, scope, and requirements.
  • Cursor: A VS Code–style AI IDE with chat, agent mode, and “rules” files.
  • Context window: The text the model “sees.” More relevant context leads to better results.
  • MCP (Model Context Protocol): Lets AI tools call external services (e.g., databases, browsers).
  • Task list with checkboxes: Markdown list of tasks that can be checked as work completes.
  • Agent mode: Cursor mode where the AI proposes and applies code changes.
  • Headless browser: A browser that runs without a UI, useful for automated tests.
  • Prisma schema: Database schema for Prisma ORM.
  • Linter: A tool that checks code style and errors.
  • Tokens: Units that measure prompt size and context cost.

Key Topics

The 3-step AI coding workflow

Ryan’s method is simple and strict. Step 1: Generate a PRD from a rule file. Keep it at a “junior developer” level to enforce clarity. Make the AI ask numbered clarifying questions (2.1, 2.2, etc.). Step 2: Generate a detailed task list from the PRD. The rule defines format, checkboxes, and when to ask for “go” before creating subtasks. Step 3: Execute the task list with a “task list management” rule. Do one subtask at a time. Stop after each, mark it complete, and wait for “go.” Ryan commits to git after a parent task when the app is stable. This avoids chaos, reduces rework, and turns AI into a reliable partner rather than a vibe-only coder.
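
A minimal sketch of how these pieces could be organized on disk; the rule file and feature names are illustrative, not taken from the episode:

```
create-prd.md                  # step 1: how to draft a PRD, incl. clarifying questions
generate-tasks.md              # step 2: how to turn a PRD into a checkboxed task list
process-task-list.md           # step 3: execute one subtask at a time, wait for "go"
/tasks/
  prd-invite-teammates.md      # a PRD produced in step 1
  tasks-invite-teammates.md    # the matching task list produced in step 2
```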

Context is king

Rushing context is the biggest mistake. Ryan uses dedicated rule files and tags PRDs and tasks into the chat. He adds “Relevant files” headers to help the model focus. He favors dot-numbered questions and junior-level language. For heavy lifts, he uses Repo Prompt to select exact files, pack them with XML-like tags, and paste into a model with a big context window. This removes black-box guessing. The result is fewer misfires, fewer rabbit holes, and faster end-to-end delivery.
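
The episode does not show Repo Prompt’s exact output, but the idea of packing selected files with XML-like tags might look roughly like this (paths and tag names are illustrative):

```
<instructions>
Implement the invite flow described in the PRD below.
Only modify the files included in this prompt.
</instructions>

<file path="tasks/prd-invite-teammates.md">
…PRD contents…
</file>

<file path="app/api/invites/route.ts">
…file contents…
</file>
```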

From vibe coding to structured prompting

Vibe coding is fun, but it does not scale. Ryan replaces it with explicit instructions, clear process, and human checkpoints. He treats the AI like a very smart student who still needs guidance. He defines file formats, steps, and stop points. He uses short loops: clarify, plan, execute, verify. He keeps tasks small and checks results after each subtask. This reduces drift, avoids broad rewrites, and keeps momentum. Structure speeds things up because it prevents costly resets.

MCP-powered development and testing

Ryan shows Browserbase MCP to control a headless browser in the cloud. It can navigate pages, click, and take screenshots from inside Cursor. This will unlock better front-end testing and UI debugging. He also uses a Postgres MCP daily to inspect data without writing SQL. Prisma and SQLite MCPs support smaller projects. The point: reduce tab-switching and toil. Bring database checks, browser actions, and code edits into one chat. This tight loop drives speed and keeps focus.
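
What driving the headless browser from the Cursor chat looks like depends on the MCP’s tool names; a hypothetical prompt along these lines captures the pattern:

```
Using the browser tools, please:
1. Navigate to http://localhost:3000/settings/team
2. Click the "Invite teammate" button
3. Take a screenshot of the dialog that opens and tell me whether the
   email field and the "Send invite" button render correctly
```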

Model choices, costs, and when to switch

Ryan often uses Gemini 2.5 Pro in Max mode despite the cost. He spends $300–$400 per month and finds it worth it. Claire defaults to OpenAI o3 and switches to Claude 3.7 Sonnet Max if o3 stalls. Ryan recommends picking a model, learning its strengths, and sticking with it. He pays for reasoning tokens when visibility into the model’s thinking justifies the extra cost. The message: use the best tool for the job, but stay conscious of cost and performance trade-offs.

Product management inside the IDE

Even if you are not coding, the PRD + task list setup is a huge win. The system turns specs into epics and tasks with clear steps aligned to the codebase. It clarifies dependencies, creates a shared plan, and makes “who breaks down the work?” a non-issue. Ryan keeps it simple: markdown over heavy PM tools. He adds or edits tasks in place. He keeps a human-in-the-loop and only commits when stable. This blends PM discipline with dev speed.
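
For illustration only (the feature and file names are invented), a task list in this style might look like:

```
## Relevant Files
- app/api/invites/route.ts – API route for sending invites
- prisma/schema.prisma – add the Invitation model

## Tasks
- [ ] 1.0 Add the invitation data model
  - [ ] 1.1 Add an Invitation model to the Prisma schema
  - [ ] 1.2 Create and run the migration
- [ ] 2.0 Build the invite API endpoint
  - [ ] 2.1 Implement the POST handler
  - [ ] 2.2 Add validation and error handling
```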

Key Themes

Structure beats vibes

A clear plan and tight loops outperform pure exploration. PRDs, task rules, and stop points convert AI power into shippable code. The gains come from fewer resets and better context.

  • Quotes:
    • “If we all just slow down a tiny bit and do these two steps, it speeds everything up.”
    • “This is the way, people. I’m telling you. Pay attention.”

Context control drives quality

Control what the model sees. Use rules, file tags, and tools like Repo Prompt. Mark files, define formats, and keep language simple. Heavy tasks need exact context packing.

  • Quotes:
    • “You really have to get good about context.”
    • “Sometimes you really really want to control the context.”

Agentic thinking without full agents

Define a process with check-ins. Ask for “go” before big steps. Do one subtask at a time. Let the model propose, but keep a human gate on execution. This reduces drift and bugs.

  • Quotes:
    • “Stop after each subtask and wait for the user’s go ahead.”
    • “I still feel like this human in the loop part is really important.”

Solo founders can build end-to-end

AI lets one person handle PM, eng, and testing to a strong standard. Not perfect, but good enough to ship. This changes staffing, time-to-market, and capital needs.

  • Quotes:
    • “I literally feel like I’m able to do all of it.”
    • “Am I able to think as deeply as a CTO? No. But I am able for sure to build this company.”

Cut toil with integrated tools

MCPs bring browsers, databases, and more into the chat. This reduces context switching and saves time. Small wins compound into faster delivery.

  • Quotes:
    • “It just reduces toil.”
    • “I want to be able to tell the AI… I don’t want to have to actually write SQL.”

Key Actionable Advice

Key Problem

Rushing context causes poor outputs and rework.

  • Solution
    • Create rule files for PRD, task generation, and task execution.
  • How to Implement
    • Write a PRD rule that asks numbered clarifying questions and targets a junior developer level. Use “Include” to tag rule files and the PRD into Cursor context (see the sketch after this list).
  • Risks to be aware of
    • Overly long PRDs. Keep scope tight and defer non-essentials.
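
An illustrative excerpt of what such a PRD rule could contain (the wording is a sketch, not Ryan’s actual file):

```
# Rule: Generating a PRD

- Before writing the PRD, ask the user clarifying questions.
- Number the questions in dot notation (2.1, 2.2, 2.3 …) so answers are easy to reference.
- Write for a junior developer: plain language, explicit requirements, no unexplained jargon.
- Include Goals, Scope, Functional Requirements, and Non-Goals sections.
- Save the result as prd-[feature-name].md in /tasks.
```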

Key Problem

PRDs are not broken into executable steps.

  • Solution
    • Generate a markdown task list with checkboxes, epics, and subtasks.
  • How to Implement
    • Use a “generate tasks” rule that defines the output format and a “respond with ‘go’ to proceed” gate. Store task files in a /tasks folder (see the sketch after this list).
  • Risks to be aware of
    • Over-granular tasks. Balance clarity and speed.
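
A sketch of the kind of instructions such a rule might hold (phrasing is illustrative):

```
# Rule: Generating a Task List from a PRD

- Read the PRD the user has tagged into context.
- First output only the high-level parent tasks as a checkboxed markdown list.
- Then ask: "Ready to generate the subtasks? Respond with 'go' to proceed."
- After "go", expand each parent task into small, concrete subtasks.
- Save the result as tasks-[prd-file-name].md in /tasks.
```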

Key Problem

AI drifts when executing multiple steps at once.

  • Solution
    • Enforce one-subtask-at-a-time with stop points.
  • How to Implement
    • Use a “task list management” rule that marks completion, stops, and waits for “go.” Review changes and run tests at each stop (see the sketch after this list).
  • Risks to be aware of
    • Slower if subtasks are too small. Tune size by iteration.
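
A rough sketch of what the one-subtask-at-a-time rule could say (not the exact file from the demo):

```
# Rule: Task List Management

- Work on exactly one subtask at a time.
- When a subtask is finished, change its checkbox from [ ] to [x] in the task file.
- Stop after each subtask and wait for the user to say "go" before starting the next.
- Do not begin new parent tasks or refactor unrelated code unless asked.
```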

Key Problem

Front-end bugs are hard to diagnose from chat alone.

  • Solution
    • Use a headless browser MCP for live navigation and screenshots.
  • How to Implement
    • Configure Browserbase MCP in Cursor. Script steps like “navigate,” “click,” and “capture.”
  • Risks to be aware of
    • Flaky selectors or environments. Stabilize test flows and error handling.

Key Problem

Database checks slow down work.

  • Solution
    • Use Postgres MCP to query via natural language.
  • How to Implement
    • Configure the Postgres MCP. Ask the AI to check data presence or values as part of the task loop (see the example prompt after this list).
  • Risks to be aware of
    • Data safety. Use read-only access in non-prod where possible.
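
A hypothetical chat prompt in this spirit (the table and column names are made up):

```
Using the Postgres tools (read-only), check whether the invitations table
now contains a row for test@example.com, and show me its status column.
```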

Key Problem

Context management is a black box.

  • Solution
    • Use Repo Prompt to pack exact files into the prompt.
  • How to Implement
    • Select files, trim generated content, and copy a composed prompt into a big-context model (e.g., o3, Claude 3.7, Gemini 2.5).
  • Risks to be aware of
    • Token costs. Keep files focused and remove generated artifacts.

Key Problem

Unclear commit strategy creates messy history.

  • Solution
    • Commit after stable parent tasks.
  • How to Implement
    • Define “stable” (builds, tests pass, lints clean). Commit completed parent tasks with clear messages, and revert if a chain of changes fails (see the sketch after this list).
  • Risks to be aware of
    • Too few commits. If risk rises, commit more often.
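
One way the commit checkpoint could be written into the task-execution rule; the commit message format is only an example:

```
# When a parent task is complete
- Run the test suite and the linter; continue only if both pass.
- Stage the changes and commit with a clear message, e.g.
  git commit -m "feat: add invitation model and invite API (parent task 2.0)"
- If a later subtask breaks the app, revert to this commit instead of patching forward.
```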

Noteworthy Observations and Unique Perspective

  • “Junior developer” framing yields clearer, more grounded PRDs.
    • Quote: “Saying junior developer is kind of a way to instruct the AI, let’s keep this at a certain level.”
  • Numbered clarifying questions prevent confusion.
    • Quote: “I want these questions to be dot notation… it becomes hard to use otherwise.”
  • Markdown tasks beat complex PM tools for speed.
    • Quote: “It’s actually easier for me just to see a markdown file and know what’s happening.”
  • Headless browser control in the cloud from the IDE is a big step for testing.
    • Quote: “This is going to unlock a huge amount of front end testing for me.”
  • Polite persistence helps course-correct the AI.
    • Quote: “Please think harder about this. I believe you can do this.”

Companies, Tool and Entities Mentioned

  • Cursor
  • ChatPRD (chatprd.ai)
  • Notion
  • Gemini 2.5 Pro (Google)
  • OpenAI o3, o1 Pro
  • Claude 3.7 Sonnet Max (Anthropic)
  • Taskmaster (open-source tool)
  • Repo Prompt (Mac app)
  • MCP, Browserbase MCP, Stagehand MCP
  • Postgres, Prisma, SQLite
  • Vercel
  • Slack, Linear, Confluence, Google Drive
  • v0.dev
  • OpenAI, Ramp, Vercel, Cursor (as Notion customers)
  • X (Twitter)

LinkedIn Ideas

  1. Title: The 3-step AI coding loop that replaces vibe coding
  • Main point: PRD → task list → one-subtask-at-a-time execution is faster and safer than vibing.
  • Core argument: Structure reduces resets and raises quality.
  • Quotes: “If we all just slow down… it speeds everything up.” “This is the way.”
  2. Title: Context is king: how to brief AI like a pro
  • Main point: Use rule files, dot-numbered questions, and junior-level language. Pack exact files with Repo Prompt.
  • Core argument: Better context produces better code, faster.
  • Quotes: “You really have to get good about context.” “Sometimes you really really want to control the context.”
  3. Title: Bring testing into your IDE with MCP
  • Main point: Browserbase + Postgres MCPs cut toil and reduce tab chaos.
  • Core argument: One chat to navigate UI, query data, and fix code.
  • Quotes: “It just reduces toil.” “This is going to unlock a huge amount of front end testing.”
  4. Title: Solo founder, full stack: what AI changes in company building
  • Main point: AI lets one person handle PM, eng, and testing to a strong level.
  • Core argument: Fewer hires, faster cycles, lower costs.
  • Quotes: “I’m able to do all of it.” “I am able for sure to build this company.”
  5. Title: The gentle art of correcting your AI
  • Main point: Polite, clear prompts and stop points work better than frustration.
  • Core argument: Treat AI like a smart junior; set process; nudge back on track.
  • Quotes: “Please think harder about this.” “I believe you can do this.”

Blog Ideas

  1. Title: From vibes to velocity: a playbook for AI-driven shipping
  • Main point: Turn chaos into output with PRDs, task rules, and human stop points.
  • Core argument: Structure beats speed when speed causes rework.
  • Quotes: “Stop after each subtask and wait for the user’s go ahead.”
  2. Title: The hidden cost of bad context and how to fix it
  • Main point: Show how poor context wastes tokens and time; demo Repo Prompt flows.
  • Core argument: Exact file selection yields better results with fewer retries.
  • Quotes: “Sometimes you really really want to control the context.”
  3. Title: Testing, but make it chat: MCP-powered front-end debugging
  • Main point: Run headless browsers from Cursor for screenshots and flows.
  • Core argument: Close the loop between code, UI state, and fixes.
  • Quotes: “Controlling a headless browser in the cloud from Cursor.”
  4. Title: PM inside your IDE: PRDs and tasks that engineers love
  • Main point: Markdown PRDs and task lists reduce PM overhead and speed handoff.
  • Core argument: Clear, close-to-code docs beat heavy tools in small teams.
  • Quotes: “Even if you just did the PRD + task list part… it’s a timesaver.”
  5. Title: Picking your model: o3, Claude 3.7, or Gemini 2.5?
  • Main point: How to choose and when to switch; cost vs. visibility vs. stability.
  • Core argument: Learn one model deeply and keep a fallback.
  • Quotes: “I probably spend maybe $300–$400 a month… worth it.”
