Phase 3 of 6

Run /visionaire:start

The market question gets answered eventually. The only variable is when — before you build, or after.

Validate whether the precisely-defined problem has real, accessible demand — before design or architecture begins. Evidence required. Speculation prohibited. Verdict in an afternoon.

Start my spec session — $297
$297 early access — first 100 builders  ·  Becomes $447 after
See how it works  →
One payment. Keep it forever. Runs locally. No telemetry.
03
Market Validation
MARKET-analysis.md
Demand before design. Evidence before commitment.
# MARKET-analysis.md — ScopeGuard

## Market Sizing
TAM: $4.2B — project scope & contract mgmt software (2025)
SAM: $610M — creative agencies, 5–50 seats, English-speaking
SOM: $18M — scope creep tooling, Y1 realistic capture ~$340K ARR
Growth: 14.3% CAGR. Remote work normalization driving demand.

## Competitor Analysis
Teamwork.com — broad PM tool, no scope-specific audit trail
Scoro — financial focus, gap: no client-facing change approval
Function Point — agency-specific, dated UX, no spec linking
Gap identified: none track changes against original spec docs.

## Demand Signals
"scope creep" — 12,400/mo searches, CPC $4.20 (commercial)
r/freelance: 340+ posts in 90 days mentioning scope disputes
LinkedIn: 2,100 "scope manager" job postings in Q1 2025
3 customer interviews: avg 4.1 hrs/week lost to scope disputes

## Verdict
VIABLE · PROCEED
Condition: position as "scope audit trail," not PM replacement.

✓ Market size sufficient for SaaS unit economics — PASS
✓ Differentiation gap confirmed — PASS
✓ Demand signals across 3+ independent sources — PASS
✓ Acquisition channel viable (search + community) — PASS
✓ Business model benchmarked to category — PASS
→ MARKET-analysis.md created · UI/UX Phase ready
If you skip this phase
  • You will talk to people who already agree with you and call it validation.
  • You will interpret interest as intent because you want it to be real.
  • You will not discover the market is dead until you try to sell, months after you could have known.

None of this is theoretical. These are the patterns that show up every time.

The market question gets answered eventually. The only variable is when.

The Product Brief answers whether the problem is clear. Market Validation answers whether solving it can support a business. Skip it, and you answer that question at launch — when changing course costs weeks of rework, not an afternoon of research.

The market that's too small
Eight months building a project management tool for independent consultants. Launch day: 127 signups. You run the numbers. Addressable market: roughly 12,000 people globally who fit your profile. At 2% penetration, that's 240 customers — $6,960 MRR. Not enough to cover one engineer's salary. The product works. The market can't sustain a business.
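The scenario's back-of-envelope math can be sketched in a few lines. All figures below are the illustrative numbers from the story above, not real data:

```python
# Unit-economics sanity check for the "market too small" scenario.
# Every figure here is the hypothetical number from the story, not real data.
addressable_market = 12_000   # people worldwide who fit the profile
penetration = 0.02            # optimistic 2% capture
price_per_month = 29          # monthly subscription price

customers = int(addressable_market * penetration)
mrr = customers * price_per_month

print(f"{customers} customers -> ${mrr:,} MRR")  # 240 customers -> $6,960 MRR
```

Three inputs, one multiplication, and the business case collapses: the research that produces those three inputs is exactly what this phase runs.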
The competitors you never found
Four months into building your "unique" CRM for real estate agents, you search your category. Page 1: Salesforce Real Estate Edition, Follow Up Boss, LionDesk, PropertyBase, BoomTown. All venture-backed. All entrenched. All with enterprise sales teams. The underserved niche — photographers, stagers, virtual tour operators — was right there. You're $150K deep with no competitive moat.
Building at the wrong moment
2018: you build a remote collaboration tool for distributed teams. Nobody cares — companies still believe real work happens in offices. You shut down after burning $300K. 2020: COVID hits. Zoom goes from $600M to $4B revenue. Your idea was right. Your timing was two years early. Timing research can't guarantee perfect entry — but it can prevent building years before the market exists.
Acquisition costs that collapse your model
Your model assumes $10 CAC based on "organic growth." Six months post-launch: 83 users. Paid ads: Facebook CPM $47, conversion rate 1.2%. Actual CAC: $178. At $29/month, customers need to stay 6+ months to break even. Competitor research — which you skipped — would have shown industry CAC averages $120–150. You priced for a fantasy, not the category.
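The payback math in this scenario works out as follows (illustrative figures from the story, not category benchmarks):

```python
# CAC payback check for the "collapsed model" scenario above.
# All numbers are the hypothetical ones from the story.
cac = 178                 # actual customer acquisition cost observed
price_per_month = 29      # monthly subscription price
assumed_cac = 10          # the "organic growth" assumption in the model

payback_months = cac / price_per_month
print(f"payback: {payback_months:.1f} months")            # payback: 6.1 months
print(f"CAC miss: {cac / assumed_cac:.0f}x the assumption")  # CAC miss: 18x the assumption
```

An 18x miss on CAC is not a marketing problem; it is a modeling problem that category benchmarks would have surfaced before launch.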
Assumptions carried forward blindly
You assumed users would pay $79/month because "you're worth it." Launch reveals: category standard is $19–29/month. Users have been trained by incumbents to expect those prices. Your feature set is comparable. Your price is 3x higher. They sign up for trials, see the gap, churn. It's not a messaging problem. It's a pricing assumption you never tested.

In every case the information existed — publicly available, researchable, findable. The cost of discovery was an afternoon. What was actually paid was months of work.

From untested assumptions to evidence-backed conviction — before design begins.

The Product Brief defined what you're building. Market Validation tests whether the world wants it. The agent reads your documents, researches the space, and returns with findings your gut can't replicate.

Before Market Validation
  • Market probably exists
  • Competitors seem weak
  • Timing feels right
  • People will pay
  • $29/month sounds fair
  • Organic growth
  • Everyone has this problem
After Market Validation — MARKET-analysis.md
TAM/SAM/SOM — $4.2B → $610M → $18M serviceable
Gap confirmed — 3 incumbents, none link changes to specs
Demand signals — 12,400 searches/mo, 340 forum posts, 3 interviews
CAC benchmark — category avg $85–120, search viable
Verdict — VIABLE · PROCEED on scope-audit positioning
Verdict — GO · Founder reviews and proceeds to UI/UX

What the research is required to do — and what it's not allowed to do.

Evidence required. Speculation prohibited. Every number cited. Every gap named. These aren't preferences — they're enforced by the output format.

01
Comprehension before research
The agent reads IDEA.md and BRIEF.md and documents what it's validating — target market, problem, key assumptions — before running a single search. If the comprehension is wrong, you see it immediately. Wasted research doesn't happen.
02
Evidence required, speculation prohibited
Every claim has a source. Every number has attribution. You will never see "probably around" or "seems moderate." If data doesn't exist, the absence is noted — not filled with inference. You see what was found and what wasn't.
03
Your assumptions get tested
The beliefs you formed in the Product Brief — about market size, customer behavior, acquisition channels, willingness to pay — are extracted and tested against market evidence. Validated assumptions stay. Challenged ones get flagged with the contradicting data.
04
Instant visual assessment
The decision scorecard rates each of five dimensions one to five stars with a one-sentence evidence summary. You see the overall signal in seconds. "STRONG opportunity (4.0/5.0)" before deciding how deeply to read the full analysis behind it.
05
Market-specific success factors
Every recommendation ties to a specific finding from this market — not generic startup advice. "Position as X" is preceded by "research found competitors don't Y." Positioning, go-to-market, and competitive advantages grounded in what actually worked in this space.
06
Top three failure modes only
Not a 40-item risk register that creates paralysis. The three most probable failures, ranked by likelihood, each with the research evidence, early warning signs to monitor, and concrete mitigation steps. Focus, not noise.
07
Validation experiments included
Three cheap, fast experiments — each runnable this week, each under $100 — to test the assumptions desk research can't answer. Bridges the analysis to customer validation, which is ultimately what determines whether to build.
08
Honest about what it can't tell you
The analysis ends with an explicit section: "What This Analysis Cannot Tell You." Desk research shows whether a market exists. It cannot tell you whether customers will pay you specifically. You leave knowing exactly what still needs answering — and what requires talking to real people.
The research exists. You just haven't run it yet.

Founders who skip this don't save time. They borrow it from launch day.

19% of startup failures cite "getting outcompeted." Most didn't know their competitors existed until after launch. The information was publicly available. It just wasn't checked.


Market question to document in 4 steps.

The agent runs autonomously. You don't provide data, conduct interviews, or guide the research — you run the command and review what comes back.

01

Comprehension and assumption extraction

The agent reads IDEA.md and BRIEF.md thoroughly, documents its understanding of target market, problem, and user — then extracts all market-related assumptions from those documents. These become the research targets. You see what's being validated before research begins.

02

Five-dimension web research

The agent runs autonomous web research across five dimensions: market size (TAM/SAM/SOM), competitive landscape, timing and catalysts, acquisition channel feasibility, and business model benchmarking. Multiple queries per dimension. Every source cited. No guesses.

03

Synthesis and scorecard

Findings are synthesized into a decision scorecard — five dimensions rated one to five stars with a one-sentence evidence summary per dimension — then expanded into detailed research sections, success factors, the three most probable failure modes, and three validation experiments to run next.
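A minimal sketch of how a scorecard roll-up like this could work, assuming a simple unweighted average of the five star ratings. The dimension names and scores here are hypothetical examples, not real output:

```python
# Hypothetical decision-scorecard roll-up: five dimensions, 1-5 stars each.
scores = {
    "market_size": 4,
    "competition": 3,
    "timing": 4,
    "acquisition": 4,
    "business_model": 5,
}

# Collapse five ratings into one headline signal.
overall = sum(scores.values()) / len(scores)
label = "STRONG" if overall >= 4.0 else "MODERATE" if overall >= 3.0 else "WEAK"

print(f"{label} opportunity ({overall:.1f}/5.0)")  # STRONG opportunity (4.0/5.0)
```

A real analysis might weight dimensions differently; the point is that five evidence-backed ratings collapse into one signal you can read in seconds.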

04

Verdict and founder decision

GO, RECONSIDER, or NO-GO — with the evidence chain behind it. The recommendation is the signal. You review the report and decide: proceed to UI/UX, refine the Product Brief, or return to ideation. The decision always belongs to the founder. Market Validation informs; it does not override.

The Decision

Market Validation does not have an autonomous stage gate. The recommendation — GO, RECONSIDER, or NO-GO — is the signal. The founder reviews the report and decides whether to proceed to UI/UX, refine the Product Brief, or return to ideation. The decision always belongs to the founder. The analysis does not make it for them.

What happens when the research comes back with bad news — and what to do with it.

A RECONSIDER verdict isn't a stop sign. Here's how to read it — and what the next move is.

Scenario 01

The analysis returns RECONSIDER — the niche is crowded

You ran the Product Brief on ScopeGuard and it passed — clear problem, real user, measurable success criteria. You run Market Validation expecting GO. The scorecard comes back 3.2/5.0. Competition: 3 stars. "Eight direct competitors in agency project management. None specifically address scope audit trail, but the space is noisy and customer switching costs are high."

The RECONSIDER verdict includes a specific path: position as a standalone scope audit layer that integrates with existing PM tools, not a replacement. The differentiation gap exists — but the framing matters. The verdict names the exact repositioning that changes the competitive picture.

What you walk away with

A clear repositioning direction — "scope audit trail as integration, not PM replacement" — grounded in competitor gap analysis, before a single UI screen is designed around the wrong positioning.

Scenario 02

The gate fails on acquisition — the channel math doesn't work

Your Product Brief assumed organic community growth as the primary acquisition channel. Market Validation research finds r/freelance and r/agency communities are active — but moderators restrict commercial content, paid partnerships are expensive, and the average post gets 200 impressions. The acquisition gate fails: "No viable low-cost channel identified for reaching 5–50 seat agencies at scale."

The gate failure isn't a stop sign — it's a specification. The returned findings name two channels the research surfaced as underexplored: agency-focused podcasts with direct sponsor relationships, and integration partnerships with existing PM tools. These become the validation experiments in the next section of the analysis.

What you walk away with

A specific acquisition hypothesis to test — podcast sponsorship and integration partnerships — before you've built the product those channels would need to convert. Two weeks of outreach, not eight months of guessing.

Scenario 03

The analysis returns GO — you proceed with documented evidence

Market Validation returns 4.3/5.0. Market size is sufficient. Three incumbents, none covering the scope audit trail gap. Demand signals across four independent sources. Search acquisition viable at estimated $80–120 CAC against a $49/month product. Business model benchmarked to category — subscription proven. Verdict: GO.

You proceed to UI/UX. But now you're not designing with instinct — you're designing with evidence. The MARKET-analysis.md document tells your designer exactly who's searching ("scope creep agency"), what competitors look like, and why the positioning works. The UI/UX phase executes on research, not assumption.

What you walk away with

A GO verdict with the evidence chain behind it — market size, competitive gaps, demand signals, acquisition model, and business model benchmarks — that informs every UI/UX and architecture decision that follows.

Phase 3 of 6. The demand test before commitment.

The Product Brief established what you're building and for whom. Market Validation asks whether building it makes sense. Architecture and UI/UX come after — once the answer is yes.

Phase 01
Idea
IDEA.md
Phase 02
Product Brief
BRIEF.md
Phase 03
Market Validation
You are here
Phase 04
UI/UX Design
UIUX.md
Phase 05
Architecture
ARCHITECTURE.md
Phase 06
Features
F-001...F-N

Market Validation is optional. The recommendation — GO, RECONSIDER, or NO-GO — is the gate signal. The founder reviews and decides. Design begins when the founder says it does.

You can skip this and move straight to UI/UX.

Teams do it. The tradeoffs are the same every time.

Without Market Validation

Faster start, unpredictable cost

  • Market size discovered at launch — after 6–12 months of work
  • Competitors found mid-development — when pivoting costs weeks
  • Timing assessed retroactively — once traction stalls
  • Acquisition costs unknown until burn is already underway
  • Pricing assumptions tested at launch against real customer churn
  • Product Brief assumptions carried forward blindly into architecture
With Market Validation

Slower start, compounding clarity

  • Market size known before a line of design or code is written
  • Competitive landscape mapped — gaps and risks documented
  • Timing assessed — catalysts identified, risks named
  • Acquisition channels evaluated for feasibility and unit economics
  • Business model benchmarked against category — pricing grounded
  • Assumptions tested against evidence — not carried forward blindly
The Reframe

Market Validation doesn't guarantee success. Nothing does. What it prevents is spending 6–12 months building something the market demonstrably doesn't support. The research takes an afternoon. Building takes months. The question isn't whether to do the research. It's whether to do it before you build, or after.

Two principles that shaped every decision in this phase.

"The goal of a startup is to figure out the right thing to build — the thing customers want and will pay for — as quickly as possible. The fundamental activity of a startup is to turn ideas into products, measure how customers respond, and then learn whether to pivot or persevere."

— Eric Ries · The Lean Startup
Market Validation is the "measure" step — run before you build rather than after. Secondary research into market size, competitive landscape, timing, and acquisition feasibility is the fastest version of Ries's build-measure-learn loop: you're measuring market signals before you've invested months in development. The loop doesn't require a shipped product to start. It requires asking the right questions early.

"Most people think it's their job to have the right answers. I think it's my job to ask the right questions. The answers are almost always already out there."

— Michael Seibel · Y Combinator Group Partner, lecture notes
Market data is almost always publicly available — industry reports, competitor pricing pages, job postings, search volume, forum threads, community discussions. The question isn't whether the information exists. It's whether someone asks for it before committing. Market Validation encodes the discipline of asking before commitment — not as an optional research phase, but as a gate before design begins.

I built Market Validation after watching smart founders go deep into architecture on ideas the market didn't support. The research existed. It just wasn't run first. Every hour spent on market analysis before building is worth ten hours of rework after the wrong thing ships. This phase enforces that priority mechanically — so it doesn't depend on discipline in the moment.

— Robert Evans

The market question gets answered eventually.

Finding a crowded or dead market during a 20-minute research session costs nothing. Finding it 6 months into a build costs everything. This phase is how you answer that question first.

$297 < 1 customer discovery sprint · saves months of building for no one
Start my spec session — $297
$297 early access — first 100 builders  ·  Becomes $447 after
One payment. Yours forever. Runs locally in Claude Code. 30-day guarantee.