CONVERSION RATE ACADEMY

Crafting a Full CRO Process: Diagnose, Prescribe, Try

Winning at CRO isn’t about one magical test – it’s about a repeatable process you can run every week or month. This class gives you the complete operating system: Diagnose → Prescribe → Try. Master this loop and results compound over time.

The Three Phases

Every CRO cycle follows the same structure: three phases, expanded into four steps. The difference between teams that get results and teams that spin their wheels is how consistently they run the loop – not how clever any single test is.

  1. Diagnose what’s stopping customers.
  2. Define the problem precisely.
  3. Propose a solution with a clear prediction.
  4. Measure whether reality agrees.

A hypothesis is the bridge between diagnosis and a test. Without it, you’re just shipping changes and hoping.

Phase 1: Diagnose

Find the most expensive problem using evidence. Your job is to answer: where are we losing the most money, and why?

Pull from three layers of evidence:

  • Quantitative: analytics, funnels, conversion rates, traffic by device/source. Look for the biggest drop-offs and the highest-value pages with the worst performance.
  • Behavioral: heatmaps, scroll depth, rage clicks, dead clicks, session replays. Watch what users actually do – not what you assume they do.
  • Qualitative: chat logs, surveys, support tickets, frontline staff feedback. Listen for objections, confusion, and missing information in the user’s own words.

Diagnosis output: A short list of the top 1–3 conversion blockers, each backed by at least one piece of evidence. If you can’t point to evidence, it’s a guess – not a diagnosis.

Diagnosis Example

Imagine you run an e-commerce store selling premium kitchen knives. Analytics shows the pricing page has a 68% bounce rate on mobile. Session replays reveal users scrolling up and down between the product specs and the price, then leaving. Chat logs show three variations of: “What’s included in the set?” and “Is this dishwasher safe?”

Diagnosis: Mobile visitors on the pricing page can’t quickly confirm what they’re getting or whether it fits their needs. Key purchase information is missing or buried.

Phase 2: Prescribe – Crafting Testable Hypotheses

Turn the diagnosis into a clear plan. This is where you go from “we found a problem” to “here’s exactly what we’re going to test and why.”

Most tests fail for one of two reasons:

  • The team tested an idea, not a problem.
  • The team measured something that didn’t represent success (clicks instead of revenue, engagement instead of completion).

A good hypothesis prevents both mistakes by forcing clarity:

  • What is broken?
  • Why is it broken?
  • What change should fix it?
  • What metric proves it?

The “Winner” Hypothesis Format

Use this structure. It’s simple, but it enforces discipline:

Because we observed [PROBLEM] in [EVIDENCE], we believe [CAUSE].
If we change [SOLUTION], then [PRIMARY METRIC] will move in [EXPECTED DIRECTION], because [MECHANISM].

That looks long, but it prevents vague, untestable fluff.

Example (Good)

Because session replays show users repeatedly editing the phone field and then abandoning the form, we believe the phone field creates perceived risk (sales calls) and effort.
If we make the phone field optional and add a “no spam / no calls” note, then form completion rate will increase, because we reduce perceived risk at the moment of commitment.

Example (Bad)

“Change the form design to make it cleaner.”

Weak hypotheses produce weak learning. Even if they “win,” you won’t know why.

What a Good Hypothesis Must Include

Every hypothesis needs five things:

  1. Target
    The page/step and the segment (e.g., mobile checkout, returning visitors).
  2. Problem
    The friction point described in plain language.
  3. Cause
    The reason you believe it’s happening (risk, confusion, trust gap, mismatch).
  4. Change
    The specific variation you will ship.
  5. Success Metric
    The number that proves it worked (and ideally a guardrail metric too).

If any of these are missing, you don’t have a hypothesis.
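
If your team tracks hypotheses in a backlog tool or in code, those five elements map naturally onto a small record. Here is a minimal sketch in Python; the field names and the example values are illustrative, not a required format:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    target: str           # page/step and segment, e.g. "mobile checkout, returning visitors"
    problem: str          # the friction point in plain language
    cause: str            # why you believe it's happening (risk, confusion, trust gap, mismatch)
    change: str           # the specific variation you will ship
    primary_metric: str   # the number that proves it worked
    guardrail: str = ""   # a metric that should not get worse (optional but recommended)

# Illustrative record based on the phone-field example above
phone_field = Hypothesis(
    target="Lead form, all devices",
    problem="Users repeatedly edit the phone field, then abandon the form",
    cause="The phone field creates perceived risk (sales calls) and effort",
    change="Make the phone field optional and add a 'no spam / no calls' note",
    primary_metric="Form completion rate",
    guardrail="Downstream lead quality",
)
```

Forcing every idea through a structure like this makes missing pieces obvious before a test gets built.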

Start Hypotheses With Evidence, Not Opinions

Your AB Split Test toolkit is the fuel for high-confidence hypotheses:

  • Heatmaps: ignored CTAs, dead clicks, “looks clickable” problems
  • Session replays: hesitation, loops, rage clicks, abandonment moments
  • Analytics: highest-value pages, leak points, device-specific drops
  • Live chat logs / surveys: repeated objections and confusion in the user’s words
  • AI CRO chat: fast clustering of patterns into plausible causes to investigate

A hypothesis is not a hunch. It’s a claim grounded in observable behavior.

Pick Metrics That Represent Business Success

Most teams over-measure the wrong things. Use:

  • Primary metric: the thing the page exists to do (purchase completion, lead submit, trial start, revenue per visitor)
  • Guardrails: metrics that should not get worse (refund rate, churn, average order value, downstream activation)

Clicks are not success unless clicks are the goal.

Make a Prediction (Even a Rough One)

Winning teams don’t just say “it will help.” They say:

  • “This will increase checkout completion.”
  • “This will reduce drop-off at step 2.”
  • “This should lift revenue per visitor.”

It doesn’t need to be perfect, just a direction and a reason.

Prescription Example

Continuing the kitchen knife example:

Hypothesis: Because session replays show mobile users scrolling between specs and price (and chat logs repeatedly ask “what’s included?”), we believe key purchase information is too hard to find on mobile. If we add a “What’s in the box” summary and a care instructions note directly below the price on mobile, then mobile pricing page conversion rate will increase, because we reduce uncertainty at the decision point.

Test backlog:

  1. Add “What’s in the box” + care FAQ below price on mobile (high impact, low effort) – test first
  2. Rewrite product description to lead with benefits instead of specs (medium impact, low effort)
  3. Add comparison table vs. competitors (medium impact, medium effort)
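
One way to keep that ordering honest is to score each idea on impact and effort and sort by the ratio. A rough sketch, where the 1–3 scores are illustrative rather than measured:

```python
# Hypothetical impact/effort scores on a 1-3 scale; use whatever scale your team agrees on.
backlog = [
    {"idea": "'What's in the box' + care FAQ below price (mobile)", "impact": 3, "effort": 1},
    {"idea": "Benefit-led product description rewrite", "impact": 2, "effort": 1},
    {"idea": "Comparison table vs. competitors", "impact": 2, "effort": 2},
]

# Higher impact and lower effort float to the top of the queue.
for item in sorted(backlog, key=lambda i: i["impact"] / i["effort"], reverse=True):
    print(f"{item['impact'] / item['effort']:.1f}  {item['idea']}")
```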

Phase 3: Try

Run the test, learn, and iterate. This is where discipline matters most – because the temptation to peek early, stop too soon, or declare a winner on gut feeling is strongest here.

  • Launch clean tests with clear goals. One primary metric. One or two guardrails. No ambiguity about what “winning” looks like.
  • Run long enough to avoid false positives. Pre-commit to a minimum sample size and duration before you launch (see the sketch after this list). Don’t stop because it “looks good” on day three.
  • Ship winners, document learnings, queue the next test. A losing test is not a failure – it’s information. Write down what you learned and feed it back into the next Diagnose phase.
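
To make that pre-commitment concrete, here is a rough sample-size sketch using the standard two-proportion formula. The baseline rate and target lift below are assumptions you would replace with your own numbers:

```python
import math
from statistics import NormalDist

def visitors_per_arm(baseline_rate, relative_lift, alpha=0.05, power=0.8):
    """Rough minimum visitors per arm for a two-proportion test.

    baseline_rate: current conversion rate, e.g. 0.029 for 2.9%
    relative_lift: smallest lift worth detecting, e.g. 0.40 for +40%
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Assumed inputs: 2.9% baseline, aiming to detect at least a +40% relative lift
print(visitors_per_arm(0.029, 0.40))  # roughly 3,900 visitors per arm for these inputs
```

The exact formula matters less than writing the number down before launch and sticking to it.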

Try Example

You build the “What’s in the box” variation and run it at 50/50 against the current mobile pricing page. After two weeks and 4,200 visitors:

  • Variation B converts at 4.1% vs. control at 2.9%
  • Confidence: 96%
  • Average order value unchanged (guardrail passed)

Decision: Ship the winner. Document that “missing product clarity on mobile” was a real blocker. Next cycle: test the benefit-led product description (backlog item #2).
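
If you want to sanity-check a result like this yourself, a two-proportion z-test is enough. The sketch below assumes an even 50/50 split (about 2,100 visitors per arm) and the rates from the example:

```python
from statistics import NormalDist

# Assumed split: ~2,100 visitors per arm out of 4,200 total
n_a, n_b = 2100, 2100
conv_a = round(0.029 * n_a)  # control conversions at 2.9%
conv_b = round(0.041 * n_b)  # variation conversions at 4.1%

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                   # pooled rate under the null hypothesis
se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5  # standard error of the difference
z = (p_b - p_a) / se

p_value = 2 * (1 - NormalDist().cdf(abs(z)))               # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.3f}, confidence = {(1 - p_value):.1%}")
# For these inputs the confidence lands in the mid-90s, consistent with the ~96% reported above.
```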

Running the Loop: Weekly and Monthly Cadence

The power of this process is in repetition, not in any single test. Here’s how to keep it running:

Weekly (30 minutes):

  • Check running tests – are they on track for sample size?
  • Review any new qualitative signals (chat logs, support tickets, survey responses).
  • Add new ideas to the backlog if evidence supports them.

Monthly (1–2 hours):

  • Close completed tests and document results.
  • Re-diagnose: has the biggest problem changed?
  • Prescribe the next 1–3 tests.
  • Share a simple report: what we tested, what we learned, what’s next.

Common Mistakes That Break the Loop

  • Skipping Diagnose: jumping straight to “let’s test a new headline” without evidence. You’ll waste cycles on low-impact changes.
  • Over-prescribing: writing 20 hypotheses and never launching any. Pick the top 1–2 and ship.
  • Stopping tests early: declaring winners before reaching statistical significance. This is the fastest way to ship false positives.
  • Not documenting learnings: if you don’t write down what you learned, you’ll re-test the same ideas six months later.
  • Treating losing tests as failures: a test that doesn’t win still tells you something. The only failure is not learning from it.

AB Split Test Workflow

AB Split Test supports the full loop:

  • Diagnose with heatmaps, session replays, and analytics signals.
  • Prescribe with AI CRO chat suggestions and prioritization – paste your evidence and ask it to rank test ideas by impact.
  • Try with fast element or full-page tests, clear goal tracking, and built-in reporting.

Your job is to run the loop consistently. The results compound over time – each cycle makes the next one smarter, because you’re building on real evidence instead of starting from scratch.