Private Equity · AI Strategy · Product Rebuild

Playbook

The Rebuild Playbook

Your Portfolio Company’s Product Is About to Be Replaced. The Only Question Is Whether You Do It — Or a Competitor Does.

83% of acquirers are paying higher multiples for AI-native targets. Only 26% of last year’s acquisition targets qualified. The firms that close this gap in the next 3–5 months will set the exit narrative.

Published by LightCI | April 2026

01 — The Case for Rebuild

The market has already split. The valuation data is unambiguous.

Three things happened in the last twelve months that made “wait and see” the most expensive AI strategy in private equity.

83%

of acquirers paid higher multiples for AI-native targets in 2025

57pt

gap between pilot success (80%) and measurable business outcomes (23%)

40%+

of agentic AI projects will be canceled by end of 2027 — Gartner

01

Buyers are paying a premium — and expecting AI-native by default.

83% of active acquirers paid higher multiples for AI-native or AI-integrated targets in 2025. 86% expect those premiums to persist through 2026. But only 26% of last year’s targets actually qualified as genuinely AI-driven. That’s a 35-point gap between what buyers want and what the market offers.

02

PE firms are building AI-native competitors from scratch.

Bain Capital, Blackstone, and Vanguard backed Norm Law — built from the ground up as an AI-native legal platform, not a retrofit. If you’re not rebuilding your own portfolio companies, someone else’s portfolio company is being built to replace yours.

03

Bolting AI onto legacy architecture doesn’t work.

80% of generative AI use cases met or exceeded expectations in pilots. Only 23% produced measurable revenue or cost outcomes. Legacy systems were designed for predictable, stateless transactions. Agentic AI requires multi-turn adaptive interactions and end-to-end coordination.

The valuation math: AI-native platforms command 25–30x EV/Revenue. The broader SaaS median sits at 3.4x. AI-relevant categories trade at 6.3–6.9x vs. 4.8x for peers. A 1–3x multiple premium on the same revenue base can mean tens of millions in additional enterprise value.

Patching isn’t a strategy. It’s a way to spend money proving that your architecture can’t do what the market now requires.

02 — Rebuild vs. Everything Else

Not every portfolio company needs a rebuild.

But the ones that do can’t afford to get a Deploy instead. Getting this wrong is expensive in both directions.

Deploy

Distribute horizontal AI tools across the org — copilots, summarization, support assist.

Timeline: Weeks
Owner: IT / Ops
Exit impact: Cost savings. Efficiency narrative.
The test: Would removing AI reduce internal productivity?

Reshape

Redesign pricing, workflows, and go-to-market around AI capabilities.

Timeline: Months
Owner: Product + Commercial
Exit impact: Revenue model evolution. Margin expansion.
The test: Would removing AI change the business model?

Rebuild

Replace the application layer with AI-native architecture. New interaction model. New product.

Timeline: 3–5 months
Owner: CTO/CPO + PE Ops Partner
Exit impact: Valuation premium. Category repositioning.
The test: Would removing the AI model collapse the differentiated outcome entirely?

The three signals that a portfolio company needs a Rebuild:

1

AI-native competitors are emerging

If a startup or PE-backed competitor is building from scratch in your market, the competitive window is measured in months, not years.

2

Dashboard-driven interaction model

Users navigate screens and click buttons. AI-native products flip this: agents detect, act, and notify. The user responds to outcomes rather than hunting for them.

3

Moat depends on product differentiation

If the moat is purely technical superiority, and someone can rebuild it with a modern stack in 4 months, the urgency is existential. Keep the data, keep the customers, rebuild the product.

When NOT to rebuild:

  • The data layer is also broken. If core data infrastructure is fragmented across disconnected systems, fix that first. A rebuild on broken data just moves the failure point.
  • No competitive urgency. If the category has no AI-native entrants, targeted AI integration may be sufficient. Don’t over-engineer the intervention.
  • Less than 18 months to exit. A rebuild needs 3–5 months for the product and 6–12 months to demonstrate traction. Focus on operational AI quick wins that improve the efficiency narrative instead.

03 — The Rebuild, Phase by Phase

From decision to exit-ready in 20 weeks.

0

The Rebuild Decision

Week 0

Confirm the rebuild is warranted, secure commitment, and set the rules of engagement — before anyone writes a line of code.

  • Apply the diagnostic test: “If you removed the AI model, would the differentiated outcome collapse?” If no, this is a feature addition, not a rebuild.
  • Map the competitive landscape: Are AI-native competitors emerging? What’s their timeline to market?
  • Assess the data layer independently from the application layer. Keep strong data layers; rebuild the application layer AI-native on top.
  • Confirm exit timeline allows for rebuild (3–5 months) + traction validation (6–12 months). Minimum 18-month runway.
  • Secure dual ownership: a named owner at the PE sponsor level and at the portfolio company level — someone whose comp is tied to this shipping.
  • Set the budget envelope. Benchmarks: enterprise analytics rebuild, 3 months / $300K. Enterprise spend management rebuild, 5 months / $425K.

Kill Criteria — Do Not Proceed If:

  • The diagnostic test returns “no” — you’re adding features, not rebuilding.
  • Data layer requires >8 weeks of remediation — re-scope as a phased investment.
  • Exit window is <18 months — the rebuild won’t have time to show validated traction.
  • No single owner at the portfolio company is willing to stake their role on this — the project will decay by week 6.
1

Extract & Scope

Weeks 1–3

Document what the current product actually does, define the new AI-native interaction model, and produce an architecture scope a build team can execute against.

Week 1: Business Logic Extraction

  • Catalog every core workflow — what users actually do, not what the feature list says. Distinguish high-value from legacy.
  • Document the business logic layer: rules, calculations, integrations, data transformations. This is the IP that transfers.
  • Map the data layer: schemas, access patterns, integration points, data quality scores.
  • Interview top 10 customers. What do they actually use? What do they wish the product did autonomously?

Weeks 2–3: Architecture & Migration Scoping

  • Define the new interaction model: dashboards that wait for clicks → agents that detect, diagnose, act, and notify.
  • Scope the AI-native architecture stack: LLM gateway, retrieval layer (RAG), coordination layer, evaluation & monitoring, runtime guardrails.
  • Build the customer migration plan: feature parity map, parallel-run strategy, early adopter cohort (5–10 customers), sunset timeline.
  • Design the new pricing model. 65% of SaaS vendors already incorporate usage-based models. AI-native products priced above $250/mo retain at 70% GRR / 85% NRR. Below $50/mo, retention craters to 23% GRR.
  • Embed per-feature, per-customer cost tracking into the architecture from day one. Non-negotiable. A 5-point gross margin decline = 25% valuation decrease at constant multiples.
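
What day-one cost tracking can look like in practice: a minimal sketch that attributes every model call's token cost to a (customer, feature) pair and can answer "what does this workflow cost per run?" on demand. The model names and prices are illustrative assumptions, not a reference implementation.

```python
from collections import defaultdict
from dataclasses import dataclass

# Illustrative per-1K-token prices; substitute your providers' actual rates.
PRICE_PER_1K = {
    "small-model": {"input": 0.0002, "output": 0.0006},
    "large-model": {"input": 0.0050, "output": 0.0150},
}

@dataclass
class Usage:
    calls: int = 0
    cost_usd: float = 0.0

# Running ledger keyed by (customer_id, feature).
ledger = defaultdict(Usage)

def record_inference(customer_id: str, feature: str, model: str,
                     input_tokens: int, output_tokens: int) -> float:
    """Attribute one model call's cost to a (customer, feature) pair."""
    price = PRICE_PER_1K[model]
    cost = ((input_tokens / 1000) * price["input"]
            + (output_tokens / 1000) * price["output"])
    entry = ledger[(customer_id, feature)]
    entry.calls += 1
    entry.cost_usd += cost
    return cost

def cost_per_run(customer_id: str, feature: str) -> float:
    """Average cost per run of one feature for one customer."""
    entry = ledger[(customer_id, feature)]
    return entry.cost_usd / entry.calls if entry.calls else 0.0

# Example: record_inference("cust-a", "invoice-triage", "small-model", 1200, 300)
```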

Red Flags

  • Business logic extraction reveals >200 distinct workflows — ruthlessly prioritize. Ship the top 20% in v1.
  • Customer interviews reveal value is primarily data/reporting, not workflow — reconsider whether this is a Rebuild or targeted AI integration.
  • Data layer assessment reveals quality issues that will poison agent outputs — add a data remediation sprint before Phase 2.

Deliverables at end of Phase 1:

  • Business logic document
  • AI-native architecture scope with cost estimates
  • Customer migration plan with early adopter commitments
  • Pricing model recommendation
  • Board-ready “go/no-go” brief
2

Build the AI-Native Core

Weeks 4–10

Ship a working AI-native product covering the highest-value workflows, with cost governance and evaluation infrastructure baked in from the start.

Weeks 4–6: Foundation & First Agents

  • Stand up core infrastructure: LLM gateway, retrieval layer, coordination layer, evaluation framework, cost tracking.
  • Build the first 2–3 agent workflows — the highest-value, highest-frequency workflows from Phase 1.
  • Implement AgentOps from day one: prompt versioning, drift detection, rollback mechanisms, observability dashboards.
  • Set up continuous evaluation loops — automated testing against golden datasets, regression detection, accuracy scoring (a minimal sketch follows this list).
  • Begin inference cost monitoring per feature, per customer. Establish unit economics baselines.
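
For illustration, the evaluation loop in the bullet above can start as small as a scored pass over a golden dataset plus a regression gate wired into CI. A minimal sketch, assuming a JSONL golden file and an exact-match scoring rule; both are stand-ins for whatever scoring your workflows actually need.

```python
import json

def exact_match(predicted: str, expected: str) -> bool:
    # Stand-in scorer; real workflows may need rubric- or model-based scoring.
    return predicted.strip().lower() == expected.strip().lower()

def evaluate(run_agent, golden_path: str) -> float:
    """Accuracy of the agent over a JSONL golden dataset."""
    correct = total = 0
    with open(golden_path) as f:
        for line in f:
            case = json.loads(line)  # {"input": "...", "expected": "..."}
            total += 1
            if exact_match(run_agent(case["input"]), case["expected"]):
                correct += 1
    return correct / total if total else 0.0

def regression_gate(run_agent, golden_path: str, baseline: float) -> float:
    """Fail the build if accuracy drops more than 2 points below last release."""
    accuracy = evaluate(run_agent, golden_path)
    assert accuracy >= baseline - 0.02, (
        f"Regression: {accuracy:.1%} vs. baseline {baseline:.1%}")
    return accuracy
```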

Weeks 7–8: Expand & Harden

  • Expand to the next tier of agent workflows (5–8 total covering the core product experience).
  • Integrate human-in-the-loop escalation paths — agents should know when they’re uncertain and route to humans (see the sketch after this list).
  • Stress test against real customer data. Agent quality on synthetic data ≠ agent quality on messy production data.
  • Validate the new pricing model with early adopter cohort.
  • Conduct first security and compliance review: data residency, IP ownership, vendor training restrictions.
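
One way to implement the escalation-path bullet above: gate each agent action on a confidence score and push low-confidence cases to a human review queue instead of acting. The threshold, names, and signatures below are illustrative assumptions.

```python
from typing import Callable, Optional

CONFIDENCE_FLOOR = 0.80  # tune per workflow from golden-dataset evals

def run_with_escalation(agent: Callable[[str], tuple],
                        task: str,
                        human_queue: list) -> Optional[str]:
    """Run one agent step; escalate to a human instead of acting when unsure."""
    answer, confidence = agent(task)
    if confidence < CONFIDENCE_FLOOR:
        human_queue.append((task, answer, confidence))  # human reviews first
        return None  # the agent does not act autonomously
    return answer
```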

Weeks 9–10: Polish & Prepare for Migration

  • Complete v1 feature scope — all high-value workflows covered, evaluation passing, cost per interaction within budget.
  • Finalize early adopter migration package: onboarding flow, documentation, support escalation.
  • Build monitoring dashboard tracking metrics buyers care about: adoption rate, agent accuracy, cost per customer, gross margin impact.
  • Prepare the first cohort migration — technical migration path, data transfer, parallel-run activation.
  • Deliver board update: product demo, unit economics model, migration timeline, competitive positioning.

Red Flags

  • Agent accuracy on production data is significantly lower than test data — retrieval layer or data quality needs work. Do not migrate customers onto unreliable agents.
  • Inference costs per interaction are 2x+ budget — model routing, caching, and prompt optimization can typically reduce costs 40–60%.
  • Early adopter feedback is “this is cool” but no one can articulate what’s better — the agent needs to do something genuinely impossible before, not just faster.
  • Team is building features instead of agents — scope creep. The test remains: if you remove the AI model, does the outcome collapse?

Build Benchmarks (what good looks like at Week 10):

Metric | Target
Agent workflows live | 5–8 covering core product experience
Agent accuracy on production data | >90% on primary workflows
Inference cost per customer/month | Within 15% of budget model
Evaluation coverage | Automated tests on 100% of agent workflows
Early adopter NPS | >50 (they’d be disappointed if you took it away)
3

Migrate & Validate

Weeks 11–16

Move real customers from legacy to AI-native. Collect the data that proves the rebuild was worth it — in numbers a buyer would underwrite.

Weeks 11–13: Early Adopter Migration

  • Migrate early adopter cohort (5–10 customers). Run legacy in parallel — track how often they fall back.
  • Monitor daily: adoption rate, agent accuracy, fallback frequency, support ticket volume, time-to-resolution.
  • Collect structured feedback weekly: what’s better, what’s worse, what’s missing.
  • Track revenue signals: are early adopters willing to pay the new pricing model?
  • Validate gross margin stability: is inference cost tracking working? Are per-customer economics in line?

Weeks 14–16: Cohort Expansion

  • Based on early adopter data, decide: expand to next cohort (25–50 customers) or iterate first?
  • Segment next cohort by use case complexity and data quality. Migrate cleanest, highest-value segments first.
  • Build the customer success playbook for migration — onboarding, first-week experience, common friction points.
  • Run retention analysis: compare early adopter cohort retention vs. legacy baseline.
  • Update unit economics model with real data: cost per customer, gross margin per customer, LTV projection.

Red Flags

  • Early adopters fall back to legacy >20% of the time — product isn’t ready for broader migration. Fix the specific workflows driving fallback.
  • Churn in early adopter cohort exceeds legacy baseline — stop expansion. Diagnose product quality vs. change resistance vs. pricing.
  • Gross margins deteriorating as usage scales — cost governance isn’t working. Implement tighter model routing, caching, and per-feature cost caps.
  • Support ticket volume spikes — product is shifting support burden, not eliminating it. Agents need better guardrails.

Migration Decision Tree:

Early adopter fallback rate <20%?
  NO (fallback ≥20%) → Iterate on product, do not expand.
  YES → Retention ≥ legacy baseline?
    NO → Diagnose retention drivers, iterate, re-measure.
    YES → Gross margin within 2 pts?
      NO → Optimize costs, then expand.
      YES → EXPAND to next cohort.
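
The same gate, written out as a function for teams that want it enforced in code rather than in a slide. Thresholds come straight from the tree above; the names are illustrative.

```python
def migration_decision(fallback_rate: float,
                       retention: float,
                       legacy_retention: float,
                       margin_gap_pts: float) -> str:
    """Phase 3 expansion gate; thresholds mirror the decision tree above."""
    if fallback_rate >= 0.20:
        return "Iterate on product, do not expand"
    if retention < legacy_retention:
        return "Diagnose retention drivers, iterate, re-measure"
    if margin_gap_pts > 2.0:
        return "Optimize costs, then expand"
    return "EXPAND to next cohort"
```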

4

Position for Exit

Weeks 16–20

Package everything into an exit narrative that commands the AI-native premium. This isn’t marketing — it’s assembling the evidence that makes a buyer pay 6x instead of 4x.

  • Finalize customer migration timeline: when does legacy go read-only? Buyers want to see a plan, not an indefinite parallel-run.
  • Present the pricing model transition as part of the exit narrative: “We moved from $X per seat to $Y per outcome, growing revenue Z% while reducing customer count dependency.”
  • Package unit economics for buyer consumption: inference cost per customer per feature, gross margin trajectory, cost-to-serve comparison.
  • Demonstrate governance maturity: NIST AI RMF alignment, ISO/IEC 42001 readiness, audit trails, incident response plan.

Build the AI moat narrative — four things that make this defensible:

Proprietary data loops

The product generates data that improves agent performance, which attracts more usage, which generates more data.

Domain-specific models

Fine-tuned models or a retrieval layer that encodes deep domain knowledge. This is IP.

Workflow IP

Agent workflows encode business logic that took years to learn. In the new architecture, this is code, not tribal knowledge.

Switching costs

Customers build processes around agent outputs. Integrations, reporting, team habits. The deeper it embeds, the harder to rip out.

Exit-ready metrics — the four dimensions buyers evaluate:

Commercial traction

% of revenue on AI-native product, retention rates, NRR, customer expansion data

Unit economics

Inference cost per customer, gross margin stability, LTV/CAC on new pricing model

Defensibility

Proprietary data loops, model fine-tuning, workflow IP, measured switching costs

Governance & risk

NIST/ISO alignment, audit trails, incident response, vendor risk controls

Premium case

AI-native SaaS commands 6.3–6.9x EV/TTM revenue vs. 4.8x for traditional peers. At $20M ARR, that spread is $30–42M in additional enterprise value. Add a credible moat narrative and you’re in premium territory.

Discount case

50% of SaaS CEOs believe incumbency protects them. Only 20% of buyers agree. The firms that don’t rebuild will discover this gap at the negotiating table.

04 — Cost Governance

The margin trap that kills exits.

5pt

gross margin decline

= 25%

valuation decrease at constant multiples

Inference costs are real, variable, and scale with usage. Unlike traditional SaaS — where marginal cost per customer approaches zero — AI-native products carry meaningful per-interaction costs. Without per-feature, per-customer cost tracking from day one, AI features quietly destroy the gross margins your exit narrative depends on.

1

Track at the right granularity

Not total AI spend. Per feature. Per customer. Per interaction. Know that Customer A’s workflow costs $0.12 per run and Customer B’s costs $0.47 — and why.

2

Build model routing into the architecture

Not every task needs GPT-4. Route simple classification to small models, complex reasoning to large models, and cache everything. Typical cost reduction: 40–60%. A routing sketch follows the last item in this list.

3

Set cost ceilings per feature

Define the maximum acceptable cost per interaction before shipping any agent workflow. If it can’t operate within that ceiling, optimize or reconsider.

4

Tie pricing to cost

Usage-based pricing must account for variable inference costs. A feature that costs $0.50 per use but is priced at $0.30 loses money on every single transaction.

5

Report margins monthly — to the board

Gross margin trajectory should be a standing board agenda item. Not buried in a CFO appendix. On the first page.
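
To make items 2 and 3 concrete: a minimal routing sketch that picks a model by a crude complexity heuristic and refuses any call whose estimated cost would blow through the feature's ceiling. Prices, ceilings, and the length-based heuristic are all illustrative assumptions.

```python
# Illustrative output-token prices per 1K tokens and per-run cost ceilings.
PRICE_PER_1K_OUT = {"small-model": 0.0006, "large-model": 0.0150}
COST_CEILING = {"invoice-triage": 0.05, "root-cause-analysis": 0.40}  # $/run

def route(feature: str, task: str, est_output_tokens: int) -> str:
    """Pick a model for a task, enforcing the feature's cost ceiling."""
    # Crude complexity heuristic: long tasks get the large model.
    model = "large-model" if len(task) > 500 else "small-model"
    est_cost = (est_output_tokens / 1000) * PRICE_PER_1K_OUT[model]
    if est_cost > COST_CEILING[feature]:
        raise RuntimeError(
            f"{feature}: est. ${est_cost:.2f}/run exceeds "
            f"${COST_CEILING[feature]:.2f} ceiling; optimize before shipping")
    return model
```

In production the heuristic becomes a learned classifier and the ceilings are fed by the per-feature ledger from Phase 1, but the shape stays the same: every call is costed before it runs.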

05 — The Bottom Line

The acquirers have already decided what they’re willing to pay for. The question is whether your portfolio company qualifies.

83% of buyers are paying more for AI-native. Only 26% of targets qualify. Gartner predicts over 40% of the agentic AI projects trying to close that gap will be canceled because they bolted AI onto legacy instead of rebuilding. The firms that rebuild — keep the data, keep the customers, rebuild the application layer AI-native on top — will be the 26% that becomes the 61%.

The rebuild isn’t a technology project. It’s the single highest-leverage value creation move available in a PE-backed software portfolio right now.

3–5 months. $300K–$425K investment.
$30–42M in additional enterprise value at $20M ARR.

Ready to move

Start with the diagnostic.

Is your portfolio company a Deploy, Reshape, or Rebuild? In two weeks, you’ll have the answer — and a scoped plan for what comes next.

Talk to LightCI

Sources & References

[1] Bain & Company, “Why Agentic AI Demands a New Architecture” (2026)
[2] BCG, “Inside the AI-First Private Equity Firm” (January 2026)
[3] Bloomberg Law, “AI-Native Firms Built by Private Equity Will Strain Legacy Model” (2026)
[4] ChartMogul, “The SaaS Retention Report: The AI Churn Wave” (2026)
[5] CNBC, “Private Equity Is About to Eat Its Own Software Portfolio” (March 2026)
[6] Development Corporate, “The AI Valuation Gap: SaaS M&A Buyers Are Paying AI Premiums” (2026)
[7] EY, “SaaS Transformation with GenAI: Outcome-Based Pricing” (2026)
[8] Gartner, “Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027” (2025)
[9] IDC, Software Vendor Pricing Model Projections (2026)
[10] LightCI, “AI in Private Equity” (2026)
[11] PYMNTS, “AI Moves SaaS Subscriptions to Consumption” (2026)
[12] Software Equity Group, “AI Impact on SaaS Valuations” (2026)