Digestify

How 80,000 companies build with AI: Products as organisms and the death of org charts | Asha Sharma

Microsoft AI products

Educational summary of "How 80,000 companies build with AI: Products as organisms and the death of org charts | Asha Sharma," hosted on YouTube. All rights belong to the original creator. Contact me for any copyright concerns.

YouTube URL: https://youtu.be/J9UWaltU-7Q

Host(s): Lenny Rachitsky

Guest(s): Asha Sharma, CVP of Product, Microsoft AI Platform

Podcast Overview and Key Segments

Overall Summary

Lenny speaks with Asha Sharma about how AI is changing product, org design, and execution. Asha explains the shift from “product as artifact” to “product as organism.” She argues that the loop of data, rewards, and tuning becomes the core IP. She shares why post-training and reinforcement learning will outspend pre-training, how model diversity beats one-model strategies, and why GUIs give way to code-native, composable interfaces. The conversation explores the rise of agents, the coming “agentic society,” and how org charts will flatten into “work charts.” Asha outlines what great AI builders do differently: full-stack polymaths, fast loops, strong evals, and platform-minded thinking. She offers a planning model built on “seasons,” not rigid roadmaps. She also shares a key leadership lesson from Satya Nadella: optimism is a renewable resource. Real examples include GitHub Copilot, medical dictation (Dragon), incident summarization, and large-scale agent deployments.

Reference

  • Product as organism: Products that learn and improve via continuous data, feedback, and tuning, not static releases.
  • Post-training: Adapting a base model using fine-tuning, reinforcement learning, and feedback loops to meet target outcomes.
  • Pre-training: The initial large-scale training of a foundation model on vast data.
  • Reinforcement learning (RL): Training with rewards to steer a model toward desired behavior.
  • Agent / Agentic society: Software that can plan, tool-call, and act with autonomy across many tasks at scale.
  • Code-native interface: Text and code-first interaction models (IDEs, terminals, APIs) over point-and-click GUIs.
  • Composability: Building blocks that plug together across systems, not one-off screens.
  • Model system / Ensemble: Using multiple models optimized for different tasks in one solution.
  • Tool calling: LLMs invoking tools, functions, or APIs to act.
  • Evals: Tests and metrics to measure model and agent quality over time.
  • Observability: Tracing behavior, errors, and performance end-to-end.
  • Embedded vs embodied agents: Embedded agents live in software; embodied agents include robots or physical systems.

Key Topics

Product as Organism

Asha describes a major shift. Products are no longer static artifacts. They now “think and live and learn.” The core KPI becomes the product team’s “metabolism.” Can it ingest data, refine rewards, and tune models to improve outcomes? Post-training loops, reward design, and A/B tests matter more than one-off launches. Proprietary data, synthetic data, and expert labels fuel these loops. This makes the loop itself the IP. Companies that master signal capture, evals, and rapid iteration build compounding advantage. The product evolves as users interact with it. Over time, it adapts to price, performance, and quality goals. This mindset reshapes how teams plan, build, and measure progress.

From GUIs to Code-Native Interfaces

The interface trend mirrors past shifts. Point-and-click database tools gave way to SQL; cloud consoles gave way to Terraform. AI is moving from GUIs to code-native, text-stream interfaces. LLMs work best with structured text and code. This favors composability over canvas. Builders should think less about pixels and more about how agents compose, read, and scale. Chat remains powerful but is not enough. Many users will interact through code, IDEs, or embedded flows. Agents will also code for each other. The winners will design for composition and collaboration at scale, not one-off screens.

Post-Training, RL, and the New AI Stack

Asha argues post-training is the new pre-training. Once models hit a certain size, it is more efficient to adapt them than to train from scratch. Fine-tuning, RL, and feedback loops steer models toward business goals. She expects spending on post-training to rival or beat pre-training. Companies should invest in data pipelines, reward design, expert labeling, and evals. Examples show dramatic gains when moving from synthetic fine-tuning to expert-annotated data. This layer becomes the strategic frontier. It also sparks new infra and platforms focused on tuning, evaluation, and continuous improvement.
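The tuning loop described above can be sketched in miniature: every post-trained candidate must pass an eval gate before it ships, so the loop improves without regressing. This is an illustrative sketch only; the "models" below are stand-in functions, not real endpoints, and the eval set is a toy.

```python
# Minimal sketch of an eval-gated post-training loop (illustrative only).
# In practice base_model and tuned_model would call real model endpoints,
# and EVAL_SET would hold expert-labeled examples.

def base_model(prompt: str) -> str:
    # Stand-in for an off-the-shelf base model.
    return prompt.lower()

def tuned_model(prompt: str) -> str:
    # Stand-in for a post-trained variant of the same model.
    return prompt.lower().strip()

# Eval set: (input, expected output) pairs, ideally expert-annotated.
EVAL_SET = [
    ("  Summarize The Incident  ", "summarize the incident"),
    ("Route To Billing", "route to billing"),
]

def score(model, eval_set) -> float:
    """Fraction of eval cases the model gets exactly right."""
    hits = sum(1 for prompt, expected in eval_set if model(prompt) == expected)
    return hits / len(eval_set)

def gate(candidate, baseline, eval_set) -> bool:
    """Ship the candidate only if it does not regress on the eval set."""
    return score(candidate, eval_set) >= score(baseline, eval_set)

if __name__ == "__main__":
    # The tuned stand-in strips whitespace, so it passes the gate.
    print(gate(tuned_model, base_model, EVAL_SET))
```

The point is the shape of the loop, not the scoring rule: capture signal, tune, evaluate against a fixed bar, and only then roll out.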

Agents and the “Work Chart”

We are early in an agentic society. As the marginal cost of good output falls, demand for output rises. Agents scale this. Embedded agents will be everywhere in software. Some will be embodied. As agents handle more tasks, org charts flatten. The “org chart becomes the work chart.” Work routes to the right agents and people. Review and governance rise in importance. Observability, fine-tuning, and self-healing loops become core. Companies will need strong alignment, accountability, and evaluation to trust autonomous flows at scale.

Planning in Seasons, Not Rigid Roadmaps

AI moves too fast for static plans. Asha’s teams plan in “seasons.” A season reflects a secular shift, like “rise of agents.” This sets shared context: what is changing, what winning looks like, and the north star. Teams then set loose quarterly OKRs and run 4–6 week squad goals. They leave slack for the slope: room to absorb new model capabilities, not just unplanned work. This balances direction with agility. It also reduces rebuild waste when a new model drops. The goal is to ride the curve, not freeze in a snapshot.

The New Builder: Full-Stack Polymath

Traditional orgs need 500+ touch points to ship. AI’s pace makes that too slow. Full-stack builders are having a renaissance. They move across PM, design, and engineering. They own the loop, not the lane. They understand costs, rewards, UX, and system design end-to-end. Many AI-first startups work this way. Now, large enterprises are adapting too. This model raises throughput and speeds learning. It also strengthens the product metabolism needed for “organism” products.

Platform Lessons: The Power of Invisible Work

What wins in platforms often hides under the hood. Asha shares lessons from Porch, WhatsApp, Instacart, and Microsoft. Matching engines, phonebook graphs, reliability, privacy, and inventory freshness beat flashy features. At platform scale, data residency, availability, and selection matter most. The same applies to AI platforms. Customers want trust, stability, and choice. The platform’s job is to deliver the boring, hard parts well. Then teams can build durable advantage on top.

Real-World Examples and Impact

  • GitHub Copilot: Ensembles, fine-tuning across languages, and next-edit suggestions run on continuous loops.
  • Dragon for physicians: Moving from synthetic to expert-annotated data raised acceptance rates to ~83%.
  • DevOps incident response: Live-call summarization helps leaders track root cause and progress in real time.
  • Agent deployments: Microsoft Azure customers have built millions of agents; 15,000+ customers are live with agents today.

Key Themes

Agents Everywhere, Governance by Design

Agents will run many tasks behind the scenes. Some will be user-facing; others will be infra-level. As they scale, governance should not be an afterthought. Build observability, robust evals, and self-healing. Use clear routing and review patterns. Embed reward design and tuning workflows. Quote(s):

  • “We’re just starting to scratch the surface of what an agentic society actually looks like.”
  • “When that happens, the org chart starts to become the work chart.”

Post-Training > Pre-Training for Business Value

Training giant models is costly and specialized. Adapting good base models is faster and cheaper. Fine-tune with your data. Add RL for outcomes you care about. Build strong evals to avoid regressions. Expect post-training to draw more spend than pre-training over time. Quote(s):

  • “Post-training is the new pre-training.”
  • “You get more leverage if you optimize what’s off the shelf for price, performance, or quality.”

Code-Native and Composable > GUI-First

LLMs thrive on text streams. Code-native workflows fit this better than GUI-heavy designs. Think components, not pages. Design for agent readability, not only human clicks. Expect IDEs, APIs, and terminals to matter more. Chat is key, but not enough. Quote(s):

  • “Future products are about composability, not the canvas.”
  • “A stream of text just connects better with LLMs.”

Plan in Seasons; Build for the Slope

Static, six-month plans break in fast-moving AI markets. Name the season. Align on the secular change and north star. Use quarterly OKRs and 4–6 week squad goals. Leave slack to surf new breakthroughs. Quote(s):

  • “We think about it as what season are we in?”
  • “Build for the slope instead of the snapshot.”

Loop, Not Lane

Cross-functional loops beat rigid lanes. Full-stack builders speed the cycle. The winning IP is the loop: data capture, reward design, tuning, evals, and rollouts. Measures shift from shipping artifacts to improving metabolic rate. Quote(s):

  • “It’s all about the loop, not the lane.”
  • “Products that think and live and learn.”

Leadership and Culture

Optimism scales teams and ambition. Platform work is the long game. Reliability, privacy, and availability win trust. Quote(s):

  • “Optimism is a renewable resource.”
  • “It wasn’t the hundreds of features. It was the infrastructure and platform.”

Key Actionable Advice

Key Problem

Scattered AI pilots with no business impact

  • Solution
    • Pick 1–2 existing processes with clear KPIs (e.g., support, fraud).
  • How to Implement
    • Map the process. Add AI to a single step. Measure. Iterate. Scale step-by-step.
  • Risks to be aware of
    • Tool sprawl. Weak evals. No P&L tie-in.

Key Problem

Roadmap chaos in a fast-moving model landscape

  • Solution
    • Plan in seasons with loose quarterly OKRs and 4–6 week squad goals.
  • How to Implement
    • Define the season, secular change, and north star. Leave slack for the slope.
  • Risks to be aware of
    • Overplanning. Ignoring shifts. No shared context.

Key Problem

Underperforming models in production

  • Solution
    • Shift spend to post-training, RL, and evals.
  • How to Implement
    • Build data pipelines. Invest in expert labels. Design rewards. Run A/B tests.
  • Risks to be aware of
    • Feedback loops without guardrails. Overfitting to synthetic data.

Key Problem

Slow delivery due to function silos

  • Solution
    • Empower full-stack builders and small squads.
  • How to Implement
    • Reduce handoffs. Give teams end-to-end ownership of the loop.
  • Risks to be aware of
    • Governance gaps. Lack of shared standards.

Key Problem

Over-reliance on a single model

  • Solution
    • Use a model system. Match models to tasks.
  • How to Implement
    • Select models by latency, quality, and cost. Swap as needs change.
  • Risks to be aware of
    • Integration complexity. Poor routing. Vendor lock-in.
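The "select models by latency, quality, and cost" advice can be made concrete with a small routing sketch. Everything here is hypothetical: the model names, numbers, and catalog shape are invented for illustration, not real offerings.

```python
# Hypothetical sketch of a "model system": route each task to the
# cheapest model that satisfies its latency and quality constraints.
# All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    latency_ms: int      # typical response latency
    quality: float       # internal eval score, 0..1
    cost_per_1k: float   # dollars per 1k tokens

CATALOG = [
    Model("small-fast", latency_ms=120, quality=0.72, cost_per_1k=0.02),
    Model("mid-tier", latency_ms=400, quality=0.85, cost_per_1k=0.20),
    Model("frontier", latency_ms=1500, quality=0.95, cost_per_1k=1.50),
]

def route(max_latency_ms: int, min_quality: float) -> Model:
    """Pick the cheapest model that meets the task's constraints."""
    candidates = [m for m in CATALOG
                  if m.latency_ms <= max_latency_ms and m.quality >= min_quality]
    if not candidates:
        raise ValueError("no model satisfies the constraints")
    return min(candidates, key=lambda m: m.cost_per_1k)

# Autocomplete needs speed; a long-form summary needs quality.
print(route(max_latency_ms=200, min_quality=0.7).name)   # small-fast
print(route(max_latency_ms=2000, min_quality=0.9).name)  # frontier
```

"Swap as needs change" then reduces to editing the catalog, which also limits vendor lock-in: callers depend on constraints, not on a specific model name.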

Key Problem

GUI-first design blocks scale

  • Solution
    • Move to code-native, composable interfaces.
  • How to Implement
    • Favor APIs, text streams, and IDE flows. Design for agent readability.
  • Risks to be aware of
    • User confusion if you remove needed visual affordances.
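"Design for agent readability" can be sketched as exposing the same capability twice: a rendered sentence for humans and a stable structured text stream for agents. The function and field names below are made up for the example.

```python
# Toy illustration of agent readability: the same data is rendered for a
# human and serialized as a composable text stream for an agent.
# order_status and its fields are hypothetical.

import json

def order_status(order_id: str) -> dict:
    # Stand-in for a real lookup; returns plain data, not markup.
    return {"order_id": order_id, "state": "shipped", "eta_days": 2}

def for_humans(status: dict) -> str:
    return f"Order {status['order_id']} has shipped, arriving in {status['eta_days']} days."

def for_agents(status: dict) -> str:
    # A stable, parseable text stream: easy to pipe into another tool or model.
    return json.dumps(status, sort_keys=True)

status = order_status("A-1001")
print(for_humans(status))
print(for_agents(status))
```

Keeping the human view as a thin layer over the agent-readable one preserves the visual affordances users need while leaving the composable interface primary.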

Key Problem

Uncontrolled agent behavior

  • Solution
    • Build strong observability, governance, and review.
  • How to Implement
    • Add tracing, metrics, and evals. Use tool access policies and automatic rollbacks.
  • Risks to be aware of
    • Silent failures. Compliance risks. Drift without detection.
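The tracing-plus-policy advice can be sketched minimally: every tool call passes through an allowlist and leaves a trace event, so denied or failing calls are visible rather than silent. The tool names and the policy shape here are illustrative assumptions, not a real governance framework.

```python
# Minimal sketch of agent governance (illustrative): an allowlist policy
# plus a trace of every tool call, including denied ones.

import datetime

ALLOWED_TOOLS = {"search_docs", "summarize"}   # e.g. no "delete_records"
TRACE: list[dict] = []                         # in practice, a tracing backend

def call_tool(agent: str, tool: str, payload: str) -> str:
    event = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "allowed": tool in ALLOWED_TOOLS,
    }
    TRACE.append(event)   # trace first, so denials are never silent
    if not event["allowed"]:
        raise PermissionError(f"{agent} may not call {tool}")
    return f"{tool} ran on: {payload}"   # stand-in for the real tool

print(call_tool("triage-agent", "summarize", "incident #42"))
try:
    call_tool("triage-agent", "delete_records", "incident #42")
except PermissionError as e:
    print(e)
print(len(TRACE))  # both calls traced, including the denied one
```

Evals and automatic rollbacks would sit on top of this trace: drift shows up as a change in the event stream, not as an unexplained outcome.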

Noteworthy Observations and Unique Perspective

  • The org chart becomes the work chart as agents scale task throughput.
    • Quote: “You just don’t need as many layers.”
  • The loop is the product and the IP.
    • Quote: “Feedback becomes continuous and observability becomes the culture.”
  • Model diversity wins in practice.
    • Quote: “I’m in the model system camp, not one model to rule them all.”
  • Planning by seasons is a pragmatic answer to AI velocity.
    • Quote: “Build for the slope instead of the snapshot.”
  • Leadership is energy management.
    • Quote: “Optimism is a renewable resource.”

Companies, Tool and Entities Mentioned

Microsoft, OpenAI, GitHub, GitHub Copilot, Nuance Dragon (medical), Instacart, Meta (Messenger, Instagram Direct, Messenger Kids), Home Depot, Coupang, Cursor, Sierra, Anthropic, Synthesia, Stanford Medicine, Interpret, DX (getdx), Finn (finn.ai), Canva, Notion, Perplexity, Strava, Hinge, Linear, Dropbox, Booking.com, Adyen, Intercom, Replit, Superhuman, Descript, Warp, Granola, Magic Patterns, Raycast, WhisperFlow, Gamma, Shephard, Mobbin

LinkedIn Ideas

  1. Title: Product as Organism: Why Your Loop Is Your IP
  • Main point: The core advantage now is your feedback and tuning loop, not features.
  • Core argument: Post-training, reward design, and evals drive compounding gains.
  • Key quotes: “Products that think and live and learn.” “It’s all about the loop, not the lane.”
  2. Title: Plan by Seasons, Not Sprints
  • Main point: In AI, rigid roadmaps fail. Seasons align teams to secular shifts.
  • Core argument: Define the season, the north star, and leave slack for the slope.
  • Key quotes: “We think about it as what season are we in?” “Build for the slope instead of the snapshot.”
  3. Title: The Death of the Org Chart
  • Main point: Agents turn hierarchies into work charts. Tasks route to the best unit.
  • Core argument: As agents scale, you need governance, evals, and self-healing loops.
  • Key quotes: “The org chart starts to become the work chart.” “You just don’t need as many layers.”
  4. Title: Post-Training Is the New Pre-Training
  • Main point: Adapting base models beats training your own for most companies.
  • Core argument: Fine-tuning, RL, and expert labels yield faster ROI.
  • Key quotes: “You get more leverage if you optimize what’s off the shelf.” “Post-training is the new pre-training.”
  5. Title: Code-Native Interfaces Will Beat GUIs in AI
  • Main point: LLMs thrive on text and code. Design for composability over canvas.
  • Core argument: IDEs, APIs, and agent readability will matter more than screens.
  • Key quotes: “Future products are about composability, not the canvas.”

Blog Ideas

  1. Title: From Artifact to Organism: The New Laws of Product in the AI Era
  • Main point: Why product metabolism, not feature counts, defines winners.
  • Core argument: The loop—data, reward, evals—is now your IP and moat.
  • Key quotes: “Products that think and live and learn.” “It’s all about the loop, not the lane.”
  2. Title: Seasons, Not Roadmaps: A Planning System for AI Velocity
  • Main point: A practical framework to align teams amid rapid model shifts.
  • Core argument: Define the season, set quarterly OKRs, run 4–6 week goals, leave slack.
  • Key quotes: “Build for the slope instead of the snapshot.”
  3. Title: Why Post-Training Will Outspend Pre-Training
  • Main point: The economics of adapting models and the rise of RL platforms.
  • Core argument: Fine-tuning + RL + expert data drive better results and faster value.
  • Key quotes: “Post-training is the new pre-training.”
  4. Title: The Agentic Enterprise: From Org Charts to Work Charts
  • Main point: How agents reshape org design, governance, and productivity.
  • Core argument: Embedded agents at scale need observability, evals, routing, and review.
  • Key quotes: “The org chart starts to become the work chart.”
  5. Title: Code-Native Is Coming: Designing for Composability in AI Products
  • Main point: Why code-first, text-stream interfaces will outperform GUI-first designs.
  • Core argument: LLMs bond with text; agents must read, compose, and scale across systems.
  • Key quotes: “A stream of text just connects better with LLMs.”
