/visionaire:start

The market question gets answered eventually. The only variable is when — before you build, or after.
Validate whether the precisely defined problem has real, accessible demand — before design or architecture begins. Evidence required. Speculation prohibited. Verdict in an afternoon.
None of this is theoretical. These are the patterns that show up every time.
The Product Brief answers whether the problem is clear. Market Validation answers whether solving it can support a business. Skip it, and you answer that question at launch — when changing course costs weeks of rework, not an afternoon of research.
In every case the information existed — publicly available, researchable, findable. The cost of discovery was an afternoon. What was actually paid was months of work.
The Product Brief defined what you're building. Market Validation tests whether the world wants it. The agent reads your documents, researches the space, and returns with findings your gut can't replicate.
Evidence required. Speculation prohibited. Every number cited. Every gap named. These aren't preferences — they're enforced by the output format.
19% of startup failures cite "getting outcompeted." Most didn't know their competitors existed until after launch. The information was publicly available. It just wasn't checked.
The agent runs autonomously. You don't provide data, conduct interviews, or guide the research — you run the command and review what comes back.
The agent reads IDEA.md and BRIEF.md thoroughly, documents its understanding of target market, problem, and user — then extracts all market-related assumptions from those documents. These become the research targets. You see what's being validated before research begins.
The agent runs autonomous web research across five dimensions: market size (TAM/SAM/SOM), competitive landscape, timing and catalysts, acquisition channel feasibility, and business model benchmarking. Multiple queries per dimension. Every source cited. No guesses.
Findings are synthesized into a decision scorecard — five dimensions rated one to five stars with a one-sentence evidence summary per dimension — then expanded into detailed research sections, success factors, the three most probable failure modes, and three validation experiments to run next.
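The scorecard synthesis described above can be sketched in a few lines. The five dimension names come from this page; the equal weighting and the verdict thresholds are illustrative assumptions, not the tool's actual logic:

```python
# Sketch of the decision scorecard: five dimensions rated 1-5 stars,
# averaged into an overall score, mapped to a recommendation.
# Weighting and thresholds are assumptions for illustration.

DIMENSIONS = (
    "market_size",             # TAM/SAM/SOM
    "competitive_landscape",
    "timing_and_catalysts",
    "acquisition_feasibility",
    "business_model",
)

def overall_score(ratings: dict[str, int]) -> float:
    """Average the five 1-5 star ratings into a single score."""
    for name in DIMENSIONS:
        stars = ratings[name]
        if not 1 <= stars <= 5:
            raise ValueError(f"{name}: stars must be 1-5, got {stars}")
    return round(sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS), 1)

def verdict(score: float) -> str:
    """Map an overall score to a recommendation (assumed cutoffs)."""
    if score >= 4.0:
        return "GO"
    if score >= 3.0:
        return "RECONSIDER"
    return "NO-GO"
```

Under these assumed cutoffs, ratings of 3, 3, 4, 3, 3 average to 3.2 — the RECONSIDER band, matching the ScopeGuard example below in shape if not in the tool's exact arithmetic.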
GO, RECONSIDER, or NO-GO — with the evidence chain behind it. Market Validation does not have an autonomous stage gate: the recommendation is the signal, not the decision. You review the report and decide whether to proceed to UI/UX, refine the Product Brief, or return to ideation. The decision always belongs to the founder. Market Validation informs; it does not override.
A RECONSIDER verdict isn't a stop sign. Here's how to read it — and what the next move is.
You ran the Product Brief on ScopeGuard and it passed — clear problem, real user, measurable success criteria. You run Market Validation expecting GO. The scorecard comes back 3.2/5.0. Competition: 3 stars. "Eight direct competitors in agency project management. None specifically address scope audit trail, but the space is noisy and customer switching costs are high."
The RECONSIDER verdict includes a specific path: position as a standalone scope audit layer that integrates with existing PM tools, not a replacement. The differentiation gap exists — but the framing matters. The verdict names the exact repositioning that changes the competitive picture.
A clear repositioning direction — "scope audit trail as integration, not PM replacement" — grounded in competitor gap analysis, before a single UI screen is designed around the wrong positioning.
Your Product Brief assumed organic community growth as the primary acquisition channel. Market Validation research finds r/freelance and r/agency communities are active — but moderators restrict commercial content, paid partnerships are expensive, and the average post gets 200 impressions. The acquisition gate fails: "No viable low-cost channel identified for reaching 5–50 seat agencies at scale."
The gate failure isn't a stop sign — it's a specification. The findings name two underexplored channels the research surfaced: agency-focused podcasts with direct sponsor relationships, and integration partnerships with existing PM tools. These become the validation experiments in the next section of the analysis.
A specific acquisition hypothesis to test — podcast sponsorship and integration partnerships — before you've built the product those channels would need to convert. Two weeks of outreach, not eight months of guessing.
Market Validation returns 4.3/5.0. Market size is sufficient. Three incumbents, none covering the scope audit trail gap. Demand signals across four independent sources. Search acquisition viable at estimated $80–120 CAC against a $49/month product. Business model benchmarked to category — subscription proven. Verdict: GO.
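The unit economics behind that GO can be checked with back-of-envelope math. The $80–120 CAC range and the $49/month price come from the example above; the 90% gross margin is an assumption typical of subscription software, not a figure from the report:

```python
# Back-of-envelope CAC payback: months of gross profit needed
# to recover the cost of acquiring one customer.
# Gross margin of 90% is an assumption for illustration.

def payback_months(cac: float, monthly_price: float,
                   gross_margin: float = 0.9) -> float:
    """CAC divided by monthly gross profit per customer."""
    return cac / (monthly_price * gross_margin)

low = payback_months(80, 49)    # best case, ~1.8 months
high = payback_months(120, 49)  # worst case, ~2.7 months
```

At roughly 1.8 to 2.7 months, payback sits well under the commonly cited 12-month benchmark for subscription businesses — which is what makes the channel viable at that price point.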
You proceed to UI/UX. But now you're not designing with instinct — you're designing with evidence. The MARKET-analysis.md document tells your designer exactly who's searching ("scope creep agency"), what competitors look like, and why the positioning works. The UI/UX phase executes on research, not assumption.
A GO verdict with the evidence chain behind it — market size, competitive gaps, demand signals, acquisition model, and business model benchmarks — that informs every UI/UX and architecture decision that follows.
The Product Brief established what you're building and for whom. Market Validation asks whether building it makes sense. Architecture and UI/UX come after — once the answer is yes.
Market Validation is optional. The recommendation — GO, RECONSIDER, or NO-GO — is the gate signal. The founder reviews and decides. Design begins when the founder says it does.
Some teams skip it. The tradeoffs are the same every time.
Market Validation doesn't guarantee success. Nothing does. What it prevents is spending 6–12 months building something the market demonstrably doesn't support. The research takes an afternoon. Building takes months. The question isn't whether to do the research. It's whether to do it before you build, or after.
I built Market Validation after watching smart founders go deep into architecture on ideas the market didn't support. The research existed. It just wasn't run first. Every hour spent on market analysis before building is worth ten hours of rework after the wrong thing ships. This phase enforces that priority mechanically — so it doesn't depend on discipline in the moment.
— Robert Evans

Finding a crowded or dead market during a 20-minute research session costs nothing. Finding it 6 months into a build costs everything. This phase is how you answer that question first.