| Validation example | Assumption tested | Cost | Time required | Signal strength | Failure mode | Best next step |
| --- | --- | --- | --- | --- | --- | --- |
| Problem discovery interviews | A real customer segment has a painful, frequent problem. | Low | Days to weeks, depending on buyer access | Medium when patterns repeat across similar buyers. | Leading questions, pitching too early, interviewing friends. | Tighten the customer segment and problem statement; use how to validate a product idea [https://dowhatmatter.com/guides/how-to-validate-a-product-idea] for the next test sequence. |
| Market validation survey | The problem, current alternatives, and buying criteria are common enough to investigate further. | Low | Days to weeks, depending on sample access | Low to medium, depending on respondent quality. | Hypothetical purchase questions, weak sample, vague audience. | Use the answers to recruit interviews or refine positioning; pull questions from market validation survey questions [https://dowhatmatter.com/guides/market-validation-survey-questions]. |
| Competitor and alternative analysis | Customers already spend time, money, or workflow effort solving the problem. | Low | A few focused research sessions | Medium when alternatives are expensive, disliked, or manual. | Assuming competitors prove your exact product demand. | Identify the wedge: cheaper, faster, narrower, more integrated, or easier to adopt. |
| Landing page smoke test | A specific audience responds to a clear promise before the product is built. | Low to medium | Days to weeks, depending on traffic source | Medium when traffic is targeted and the ask is meaningful. | Untargeted traffic, unclear offer, treating email capture as purchase intent. | Improve the promise or run a stronger startup smoke test [https://dowhatmatter.com/guides/smoke-test-startup] with a sharper action. |
| Fake-door test | Users click, request, or attempt to access a feature or offer before it exists. | Low | Varies with traffic and product surface | Medium when the intent action is close to real usage. | Damaging trust, testing curiosity instead of intent. | Compare feature demand across options, then validate the strongest option with clearer messaging or a manual follow-up. |
| Concierge or manual pilot | A buyer will use the outcome even if delivery is manual behind the scenes. | Medium | Varies with workflow complexity | High when users rely on the result repeatedly. | Founder over-service hides poor economics or weak product pull. | Document repeated workflows, objections, and value moments before building software. |
| Paid pilot or pre-order | A buyer will make a real commitment before the full product exists. | Medium | Varies with sales cycle and approval path | High because money, procurement effort, or calendar commitment creates friction. | Discounted curiosity, custom work disguised as product demand. | Define pilot success criteria and decide whether to build, narrow, or stop. |
| Pricing conversation with target buyers | The value is strong enough to support a plausible business model. | Low | Days to weeks, depending on buyer access | Medium when buyers discuss budget, alternatives, and approval paths concretely. | Asking "would you pay?" instead of anchoring to a real offer. | Test packaging, buyer role, and willingness to commit in a pilot. |
| Channel demand test | A repeatable channel can reach the right buyers at acceptable effort or cost. | Low to medium | Days to weeks, depending on channel and audience | Medium when outreach or traffic converts from a defined audience. | Confusing channel novelty with durable demand. | Keep the message that works and test whether it repeats with a second batch. |
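The fake-door row's next step, comparing feature demand across options, comes down to comparing intent-click conversion rates rather than raw click counts. A minimal sketch of that comparison (all option names and numbers here are hypothetical, not from the table):

```python
# Hypothetical fake-door results: impressions and intent clicks per option.
results = {
    "export_to_csv": {"impressions": 480, "clicks": 61},
    "slack_alerts": {"impressions": 512, "clicks": 23},
    "bulk_edit": {"impressions": 495, "clicks": 58},
}

def conversion_rates(data):
    """Return {option: clicks / impressions}, guarding against zero impressions."""
    return {
        name: (r["clicks"] / r["impressions"]) if r["impressions"] else 0.0
        for name, r in data.items()
    }

def strongest_option(data):
    """Pick the option with the highest intent-click conversion rate."""
    rates = conversion_rates(data)
    return max(rates, key=rates.get)

# The winner is the option to validate next with clearer messaging
# or a manual follow-up, per the table above.
print(strongest_option(results))  # → export_to_csv
```

Normalizing by impressions matters because options rarely get identical traffic; comparing raw clicks would quietly favor whichever door was shown more often.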