CONVERSION RATE ACADEMY

Running Multiple Tests: Traffic Allocation and Overlap

As you get better at testing, you’ll want to run multiple experiments at once. This class teaches you how to do that without corrupting results or showing users conflicting experiences.

Core rule: Don’t run overlapping tests that affect the same users on the same page area at the same time. If two tests change the same decision point, you won’t know which caused the outcome.

Safe ways to run multiple tests:

  • Different pages: homepage test + checkout test (safe).
  • Different segments: new visitors vs returning (if your setup supports it).
  • Sequential testing: finish test A, then run test B informed by the result.
  • Traffic allocation: reduce traffic per test so each still reaches sample size.
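To see what traffic allocation costs you in runtime, here is a minimal sketch of the math, assuming a standard two-proportion power calculation (the z-values 1.96 and 0.84 correspond to 95% confidence and 80% power; all numbers are illustrative, not AB Split Test defaults):

```python
from math import ceil, sqrt

def sample_size_per_variant(baseline, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative
    conversion-rate lift (two-sided z-test, equal split)."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

def days_to_finish(daily_visitors, allocation, n_per_variant, variants=2):
    """Days needed when only `allocation` (0-1) of traffic enters the test."""
    daily_in_test = daily_visitors * allocation
    return ceil(variants * n_per_variant / daily_in_test)

# A store with a 3% conversion rate hoping to detect a +20% lift,
# giving the test half of 2,000 daily visitors:
n = sample_size_per_variant(baseline=0.03, lift=0.20)
days = days_to_finish(daily_visitors=2000, allocation=0.5, n_per_variant=n)
```

Halving the allocation roughly doubles the runtime, so cut traffic per test only as far as your timeline can absorb.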

Practical approach:

  1. Maintain a calendar of active tests.
  2. Ensure each test has its own goal and doesn’t collide with another test’s changes.
  3. Use consistent naming and documentation so learnings don’t get lost.
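Steps 1 and 2 can be as simple as a spreadsheet, but if you want something programmatic, a tiny registry that blocks launches on an already-tested page area works too (a sketch with hypothetical names, not part of any tool):

```python
from dataclasses import dataclass, field

@dataclass
class TestRegistry:
    """Tracks active tests by the page area they change."""
    active: dict = field(default_factory=dict)  # page area -> test name

    def launch(self, name, area):
        # Refuse to start a second test on the same decision point.
        if area in self.active:
            raise ValueError(
                f"Collision: '{self.active[area]}' already tests '{area}'")
        self.active[area] = name

    def finish(self, name):
        # Free up every area the finished test occupied.
        self.active = {a: t for a, t in self.active.items() if t != name}

reg = TestRegistry()
reg.launch("homepage-hero-v2", "homepage")
reg.launch("checkout-cta", "checkout")   # different page: allowed
try:
    reg.launch("homepage-headline", "homepage")  # same page area: blocked
except ValueError as e:
    print(e)
```

The point is the check, not the data structure: every new test has to declare what it touches before it goes live.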

Using Segments to Run More Tests Safely

Segmentation isn’t just for targeting – it’s one of the best tools for running multiple tests without interference. If you’ve completed the previous class on Segmenting Users for Tests, you already know how to split audiences by device, source, and visitor type. Here’s how that applies when you’re juggling multiple experiments:

Segment-based parallel testing:

  • Mobile vs. desktop: run a completely different test on each device. Mobile users see a checkout layout test while desktop users see a pricing page copy test. Zero overlap, clean data.
  • Paid vs. organic traffic: test ad-specific landing page variations for paid visitors while running a separate homepage hero test for organic visitors. Each audience gets one experiment.
  • New vs. returning visitors: test onboarding messaging for first-time visitors while testing a loyalty offer for returning visitors. Different intent, different tests, no collision.

Why this works: when each segment only sees one test, there’s no interaction effect. You get clean results from every test simultaneously, and with two or three non-overlapping segments you can run two or three times as many experiments in the same time period.

Watch out for:

  • Sample size per segment. Splitting traffic into segments means each test gets fewer visitors. Make sure each segment has enough volume to reach significance in a reasonable timeframe.
  • Don’t over-slice. Running a mobile-only, Instagram-only, new-visitor-only test sounds precise, but you might be left with 50 visitors a week. Keep segments broad enough to be testable.
  • Document which segments are in which tests. If you’re running three segmented tests at once, a simple spreadsheet or calendar showing “mobile = checkout test, desktop paid = pricing test, desktop organic = hero test” prevents confusion.
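A quick way to sanity-check the over-slicing point is to multiply out your audience shares before committing to a segment (the traffic numbers below are made up for illustration):

```python
def weekly_segment_volume(weekly_visitors, shares):
    """Multiply audience-share fractions to see what's left after slicing.
    e.g. shares=[0.6, 0.1, 0.5] -> mobile-only, Instagram-only, new-only."""
    volume = weekly_visitors
    for share in shares:
        volume *= share
    return round(volume)

# 10,000 weekly visitors sliced to mobile (60%) x Instagram (10%) x new (50%):
print(weekly_segment_volume(10_000, [0.6, 0.1, 0.5]))  # 300 visitors/week
```

Compare that remainder against the sample size your test needs; if reaching significance would take months, broaden the segment.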

The power move: combine segmentation with sequential testing. Run a broad test first (all traffic), find the winner, then run segment-specific follow-up tests to optimize further for mobile, paid, or returning visitors. This is how mature CRO programs compound gains.

AB Split Test workflow: Use AB Split Test’s traffic allocation controls and audience targeting to manage exposure and avoid interference. Keep tests isolated by page, by segment, or by clearly separated components. Use reporting + AI summaries to document outcomes and roll learnings into the backlog.