CONVERSION RATE ACADEMY
Segmenting Users for Tests: Device, Source, and Audience
Not all visitors behave the same way. A mobile user on Instagram has different intent, patience, and context than a desktop user from Google Search. This class shows how to segment your test audience so you’re measuring the right behavior – and not drowning real signals in irrelevant noise.
Why Segmentation Matters for Testing
A test that shows “no winner” across all traffic might actually have a clear winner for mobile users and a clear loser for desktop – but when you average them together, the signals cancel out.
Segmenting lets you:
- Find wins hiding inside “flat” results – a variation might convert 20% better on mobile but look neutral overall.
- Avoid shipping changes that hurt a key audience – what works for ad traffic may confuse organic visitors.
- Run more targeted experiments – test a mobile-specific layout only for mobile users, without affecting desktop.
- Get to significance faster for a given effect – a homogeneous audience has less behavioral variance, so real differences stand out from the noise sooner.
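The cancellation effect described above is easy to see with numbers. A quick sketch, using purely illustrative visitor counts and conversion rates (not data from this class):

```typescript
// Illustrative only: hypothetical traffic where variant B lifts mobile
// conversions (+20%) but hurts desktop – blended, the test looks flat.
const segments = [
  { name: "mobile",  visitors: 1000, convA: 0.040, convB: 0.048 },
  { name: "desktop", visitors: 1000, convA: 0.050, convB: 0.042 },
];

// Blend all traffic together, the way an unsegmented test report would.
function pooledRate(variant: "convA" | "convB"): number {
  const conversions = segments.reduce((s, seg) => s + seg.visitors * seg[variant], 0);
  const visitors = segments.reduce((s, seg) => s + seg.visitors, 0);
  return conversions / visitors;
}

console.log(pooledRate("convA").toFixed(3)); // 0.045
console.log(pooledRate("convB").toFixed(3)); // 0.045 – per-device signals cancel out
```

Segment the same data by device and the mobile win (and desktop loss) are immediately visible.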
Segment by Device
Device is the most common and highest-impact segment. Mobile and desktop users often have fundamentally different behaviors on the same page.
Common device-based test scenarios:
- Mobile-only layout test: your checkout converts well on desktop but drops off on mobile. Test a simplified mobile checkout without changing the desktop experience.
- Desktop-only content test: desktop users read long-form content; mobile users scroll past it. Test a shorter version on mobile while keeping the full version on desktop.
- Tablet considerations: tablet traffic is usually too small to segment on its own. Group it with desktop or exclude it – don’t let a tiny sample pollute your results.
When to segment by device:
- Heatmaps or session replays show different behavior patterns on mobile vs. desktop.
- Conversion rates differ significantly between devices (check your analytics first).
- The change you’re testing is layout or UI-driven and only affects one screen size.
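A minimal sketch of width-based device bucketing, including the "group tablet with desktop" advice from above. The 768px/1024px breakpoints are common conventions, not values prescribed by this class – match them to how your analytics tool defines devices:

```typescript
type Device = "mobile" | "tablet" | "desktop";

// Assumed breakpoints – align these with your analytics definitions.
const TABLET_MIN = 768;
const DESKTOP_MIN = 1024;

function classifyDevice(viewportWidth: number): Device {
  if (viewportWidth < TABLET_MIN) return "mobile";
  if (viewportWidth < DESKTOP_MIN) return "tablet";
  return "desktop";
}

// Tablet traffic is usually too small to test alone, so fold it into
// desktop before assigning visitors to a test segment.
function testSegment(viewportWidth: number): "mobile" | "desktop" {
  return classifyDevice(viewportWidth) === "mobile" ? "mobile" : "desktop";
}
```

For example, `testSegment(390)` buckets a typical phone as `"mobile"`, while `testSegment(820)` folds a tablet into `"desktop"`.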
Segment by Traffic Source
Where visitors come from shapes what they expect when they land. Someone clicking a Facebook ad has different intent than someone who Googled your brand name.
Key source segments:
- Paid ads (Google, Meta, TikTok, etc.): these visitors saw a specific promise in the ad. The landing page needs to deliver on that promise immediately. Test headline/hero alignment with ad copy.
- Social (Instagram, Facebook, X, LinkedIn): often browsing, lower intent, shorter attention span. Test shorter pages, stronger hooks, and social proof near the top.
- Organic search: higher intent – they searched for something specific. Test content that answers their query directly and reduces steps to conversion.
- Email / returning visitors: already familiar with your brand. Test offers, urgency, and deeper product content rather than introductory messaging.
- Referral / affiliate: came with a recommendation. Test trust reinforcement (reviews, guarantees) since they’re pre-sold but may need reassurance.
When to segment by source:
- You’re running paid campaigns and want to optimize the landing page specifically for ad traffic.
- Analytics shows wildly different conversion rates by source on the same page.
- You suspect your messaging works for one audience but not another.
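One way to bucket traffic sources client-side is to check UTM parameters first (tagged campaigns are the most reliable signal), then fall back to the referrer hostname. A rough sketch – the domain lists are illustrative and far from exhaustive, and real-world channel mapping has many edge cases:

```typescript
type Source = "paid" | "social" | "organic" | "email" | "referral" | "direct";

// Illustrative domain lists – extend for your own channels and country TLDs.
const SEARCH_HOSTS = ["google.com", "bing.com", "duckduckgo.com"];
const SOCIAL_HOSTS = ["instagram.com", "facebook.com", "x.com", "linkedin.com", "t.co"];

const matchesHost = (host: string, domains: string[]) =>
  domains.some((d) => host === d || host.endsWith("." + d));

function classifySource(pageUrl: string, referrer: string): Source {
  const medium = (new URL(pageUrl).searchParams.get("utm_medium") ?? "").toLowerCase();

  // Tagged campaigns take priority over the referrer.
  if (["cpc", "ppc", "paid"].includes(medium)) return "paid";
  if (medium === "email") return "email";

  if (!referrer) return "direct";
  const host = new URL(referrer).hostname;
  if (matchesHost(host, SEARCH_HOSTS)) return "organic";
  if (matchesHost(host, SOCIAL_HOSTS)) return "social";
  return "referral";
}
```

With a classifier like this you can restrict a test to one source bucket, or tag each session so segmented results can be compared afterward.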
Segment by New vs. Returning Visitors
First-time visitors and returning visitors are in completely different mental states:
- New visitors need to understand what you do, why they should trust you, and what to do next. Test clarity, proof, and onboarding.
- Returning visitors already know you. Test shortcuts to conversion – saved carts, personalized recommendations, “welcome back” messaging, or direct CTAs.
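A common first-party way to tell these apart is a stored "seen before" flag. A sketch using an injected key-value store so the logic is testable – in the browser you would pass `window.localStorage`, and the key name here is made up:

```typescript
type Store = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

const VISITED_KEY = "ab_seen_before"; // hypothetical key name

// Returns the status for THIS visit, then marks the visitor as seen
// so every subsequent visit is classified as returning.
function visitorStatus(store: Store): "new" | "returning" {
  const seen = store.getItem(VISITED_KEY) !== null;
  if (!seen) store.setItem(VISITED_KEY, Date.now().toString());
  return seen ? "returning" : "new";
}
```

Note the limitation: visitors who clear storage, browse privately, or switch devices will be misclassified as new, so treat this as an approximation.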
Practical Rules for Segmented Tests
- Check your traffic volume first. Segmenting cuts your sample size. If you only get 500 mobile visitors a month, a mobile-only test will take a long time to reach significance. Make sure the segment is large enough to test.
- Segment before you launch, not after. Deciding to “look at mobile only” after a test ends is data mining – you’ll find patterns that aren’t real. Choose your segment upfront as part of your hypothesis.
- One segment per test. Don’t try to segment by device AND source AND new/returning all at once. Pick the one that matters most for your hypothesis.
- Use your diagnosis to pick the segment. If heatmaps show mobile users struggling but desktop users converting fine, the segment is obvious. Let the evidence decide.
- Always have a guardrail for the excluded segment. If you’re testing mobile-only, still monitor desktop to make sure nothing broke.
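The traffic-volume check above can be made concrete with a back-of-envelope duration estimate. This sketch uses the common "n ≈ 16 · p(1−p) / δ²" rule of thumb for roughly 80% power at α = 0.05 – an approximation, not a substitute for a proper power calculation:

```typescript
// Visitors needed PER VARIANT to detect an absolute lift `delta`
// over a baseline conversion rate `p` (~80% power, alpha = 0.05).
function sampleSizePerVariant(p: number, delta: number): number {
  return Math.round((16 * p * (1 - p)) / (delta * delta));
}

// Rough test duration in months for a 50/50 split of one segment's traffic.
function monthsToSignificance(segmentVisitorsPerMonth: number, p: number, delta: number): number {
  const totalNeeded = 2 * sampleSizePerVariant(p, delta);
  return totalNeeded / segmentVisitorsPerMonth;
}

// The 500-mobile-visitors-a-month example: 4% baseline conversion,
// hoping to detect a 1-point absolute lift (4% -> 5%).
console.log(sampleSizePerVariant(0.04, 0.01)); // ≈ 6144 per variant
console.log(monthsToSignificance(500, 0.04, 0.01).toFixed(1)); // ≈ 24.6 months – far too slow
```

If the estimate comes out in years, either test a bolder change (larger expected lift), use a broader segment, or skip the test.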
AB Split Test Workflow
When setting up a test in AB Split Test, use audience targeting to restrict the test to a specific segment – device type, referrer, or visitor status. This keeps your data clean and your results actionable. Combine with heatmaps and session replays filtered to the same segment to validate that the behavior you diagnosed is real for that specific audience. Use the AI CRO chat to help interpret segmented results: “This test won on mobile but lost on desktop – what might explain the difference?”