How Long Does a Split Test Take?
The short answer: at least one week, often longer for lower-traffic sites.
The longer answer depends on your traffic, your conversion rate, and how AB Split Test decides when it has seen enough data to call a winner.
Why tests need time
People behave differently day to day. Someone visiting on a Monday morning is in a different headspace than someone browsing on a Sunday evening. A test that only runs for a day or two can produce results that reflect that particular slice of behavior rather than a true pattern.
Running a test for at least a full week captures a complete cycle of visitor behavior across different days, times, and moods. That gives your results a much better chance of reflecting what your audience actually does, rather than what a specific group of visitors happened to do on a Tuesday.
How AB Split Test decides when to call a winner
AB Split Test uses a Bayesian statistics engine to monitor your test in real time. Rather than waiting for a fixed sample size, it continuously calculates the likelihood that each variation is the true winner based on the data collected so far.
When a variation reaches over 95% likelihood of being the winner AND the test has run for at least one week, AB Split Test will declare a winner.
As of v2.5.1, there is one additional requirement: each variation must have received at least 50 visits before a winner can be declared. If your test has not yet reached that threshold, you will see an Underpowered badge on your results. This is intentional. Declaring a winner on 12 visits would not be meaningful, and the badge tells you exactly why the test is still running.
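To make the decision rule concrete, here is a minimal sketch of how a Bayesian engine like this can combine the three conditions. AB Split Test's internal implementation is not shown here; this is a common textbook approach, using Monte Carlo draws from Beta posteriors, and the function names are illustrative, not part of the plugin.

```python
import random

def winner_probability(conv_a, visits_a, conv_b, visits_b, draws=20000):
    """Estimate P(variation B's true conversion rate beats A's) by
    sampling from Beta(conversions + 1, non-conversions + 1) posteriors.
    This is a standard approximation, not AB Split Test's actual code."""
    wins_b = 0
    for _ in range(draws):
        rate_a = random.betavariate(conv_a + 1, visits_a - conv_a + 1)
        rate_b = random.betavariate(conv_b + 1, visits_b - conv_b + 1)
        if rate_b > rate_a:
            wins_b += 1
    return wins_b / draws

def can_declare_winner(conv_a, visits_a, conv_b, visits_b, days_running):
    """Apply the three conditions described above: over 95% likelihood,
    at least one week of runtime, and at least 50 visits per variation
    (the v2.5.1 threshold behind the Underpowered badge)."""
    if days_running < 7:
        return False
    if min(visits_a, visits_b) < 50:
        return False  # test would show the Underpowered badge instead
    p = winner_probability(conv_a, visits_a, conv_b, visits_b)
    return max(p, 1 - p) > 0.95
```

For example, a test with 5 conversions on 60 visits versus 20 conversions on 60 visits after 10 days passes all three checks, while the same numbers at 3 days or with only 30 visits per variation do not.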
What the Underpowered badge means
The Underpowered badge was introduced in v2.5.1. It appears when your test does not yet have enough visits per variation to produce a reliable result.
It does not mean something is wrong. It means the test needs more data. Keep it running.
If your test has been running for several weeks and the badge is still showing, it usually means one of the following:
Your conversion goal fires rarely. A scroll depth or time active goal will collect data much faster than a purchase or form submission on a low-traffic page.
Your traffic percentage is limited. If you set the test to only show to 10% of visitors, it will take roughly 10 times longer to collect the same number of visits. Go to your test settings and check the traffic allocation.
Your page does not get much traffic. This is normal. Low-traffic sites simply need more calendar time. The test is still valid; it just needs patience.
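The traffic-allocation point above is simple arithmetic, and a quick back-of-envelope calculation makes it visible. The traffic figures below are illustrative, and the even split between variations is an assumption:

```python
import math

def days_to_threshold(daily_visits, allocation, variations, threshold=50):
    """Estimate calendar days until every variation reaches the 50-visit
    threshold, assuming the test's traffic splits evenly between
    variations. All inputs here are hypothetical examples."""
    visits_per_variation_per_day = daily_visits * allocation / variations
    return math.ceil(threshold / visits_per_variation_per_day)

# A page with 100 visits/day and 2 variations at 100% allocation
# clears the threshold in 1 day; the same test at 10% allocation
# takes roughly 10 times longer.
```

This is why opening allocation to 100% is usually the fastest single change you can make.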
Multi-Armed Bandit: a different approach for ongoing optimization
If you are running a test that you want to keep live indefinitely rather than waiting for a single winner, the Multi-Armed Bandit mode shifts traffic progressively toward whichever variation is performing better, while keeping a small portion of traffic exploring the other variations.
This mode does not declare a winner and stop. It keeps optimizing continuously. It is available on the Ultimate plan and needs to be enabled first under Settings, then Advanced Settings, then Enable Dynamic Traffic Optimization.
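The exact mechanics of Dynamic Traffic Optimization are not documented here, but a common way to implement a multi-armed bandit of this kind is Thompson sampling, sketched below under that assumption. The function name and data shapes are illustrative:

```python
import random

def pick_variation(stats):
    """Thompson sampling sketch: draw one sample from each variation's
    Beta posterior and serve the variation with the highest draw.
    Better performers win more draws and therefore more traffic, while
    weaker variations still get occasional exploratory traffic.
    stats maps a variation name to (conversions, visits)."""
    best, best_draw = None, -1.0
    for name, (conversions, visits) in stats.items():
        draw = random.betavariate(conversions + 1, visits - conversions + 1)
        if draw > best_draw:
            best, best_draw = name, draw
    return best
```

Run repeatedly, a picker like this sends most visitors to the stronger variation without ever fully abandoning the others, which matches the "keeps optimizing continuously" behavior described above.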
How to speed up your results
You cannot add traffic that does not exist, but there are a few things that can help:
Use a higher-funnel conversion goal. If your test is tracking purchases and you only get 20 orders a month, consider switching to a scroll depth or time active goal to collect data faster. You can still track purchases as a secondary subgoal.
Set traffic allocation to 100%. If you limited the test to a portion of visitors, open it up to everyone.
Run the test on a higher-traffic page. If you are testing a headline on a page that gets 50 visits a month, consider whether a different page would give you cleaner data faster.